Clasificación Aviar en Ibagué mediante Deep Learning: Enfoque en Precisión y Eficiencia¶
El proceso de clasificación aviar en Ibagué, mediante técnicas de Deep Learning, reviste una importancia fundamental en la conservación y el estudio de la biodiversidad de la región. Ibagué, ubicada en una zona geográfica rica en diversidad de aves, se enfrenta a desafíos significativos en la identificación y monitoreo de estas especies. La implementación de un sistema multiclase de identificación aviar no solo facilita la tarea de catalogación para investigadores y conservacionistas, sino que también proporciona una herramienta eficaz para la evaluación del estado de las poblaciones aviares y la detección temprana de posibles amenazas, como la pérdida de hábitat o el cambio climático. Con un enfoque en la precisión y eficiencia del proceso de clasificación, este sistema puede contribuir de manera significativa a la gestión sostenible de los recursos naturales y la protección del medio ambiente en la región de Ibagué.
Librerias¶
A continuación se relacionana algunas de las bibliotecas y módulos de Python que se utilizan comúnmente en el desarrollo de aplicaciones de aprendizaje automático, procesamiento de imágenes y que se utilizaran en este proyecto:
Keras: Es una biblioteca de redes neuronales de código abierto escrita en Python que facilita la creación y entrenamiento de modelos de aprendizaje profundo. Proporciona una interfaz simple y consistente para construir y entrenar modelos de redes neuronales.
NumPy: Es una biblioteca fundamental para computación numérica en Python. Se utiliza para realizar operaciones matemáticas en matrices y matrices multidimensionales, lo que es esencial para el procesamiento de datos en el aprendizaje automático.
OpenCV (cv2): OpenCV (Open Source Computer Vision Library) es una biblioteca de código abierto que se utiliza para el procesamiento de imágenes y visión por computadora. Proporciona una amplia gama de funciones y algoritmos para tareas como manipulación de imágenes, detección de características, reconocimiento de objetos, seguimiento de objetos, entre otros.
Matplotlib: Es una biblioteca de trazado en 2D de Python que produce figuras de calidad de publicación en una variedad de formatos y entornos. Se utiliza para visualizar datos y resultados, incluidas imágenes y gráficos.
TensorFlow: Es una biblioteca de código abierto desarrollada por Google para el aprendizaje automático y la inteligencia artificial. TensorFlow proporciona un ecosistema completo para construir y entrenar modelos de aprendizaje profundo, incluidas API de alto nivel como Keras.
Scikit-learn: Es una biblioteca de aprendizaje automático de código abierto que proporciona herramientas simples y eficientes para el análisis predictivo y la minería de datos. Incluye una variedad de algoritmos de aprendizaje supervisado y no supervisado, así como herramientas para preprocesamiento de datos, evaluación de modelos y selección de características.
En resumen, estas bibliotecas son componentes clave en el desarrollo de aplicaciones de aprendizaje automático y procesamiento de imágenes, proporcionando herramientas y funciones para construir, entrenar, evaluar y visualizar modelos de manera eficiente.
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout, BatchNormalization, Input
from keras.optimizers import Adam
from keras.callbacks import TensorBoard, ModelCheckpoint
from keras.utils import to_categorical
import os
import numpy as np
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
from keras.applications.vgg16 import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import load_model
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
Configuración de parámetros¶
Es crucial establecer los parámetros específicos, como el tamaño de las imágenes de entrada (width_shape y height_shape), el número de clases a predecir (num_classes), así como los hiperparámetros de entrenamiento, como el número de épocas (epochs) y el tamaño del lote (batch_size), antes de iniciar el proceso de entrenamiento de un modelo de aprendizaje profundo para la clasificación de imágenes. Estos parámetros tienen un impacto significativo en la calidad y eficiencia del modelo resultante. Por ejemplo, el tamaño de las imágenes de entrada afecta la capacidad del modelo para capturar detalles importantes de las imágenes, mientras que el número de clases determina la complejidad de la tarea de clasificación. Además, la configuración adecuada de los hiperparámetros de entrenamiento, como el número de épocas y el tamaño del lote, es esencial para garantizar un entrenamiento efectivo y eficiente del modelo, evitando el sobreajuste o el subajuste. Por lo tanto, la correcta especificación de estos parámetros es fundamental para lograr resultados precisos y confiables en el proceso de clasificación de imágenes.
width_shape = 224
height_shape = 224
num_classes = 18
epochs = 50
batch_size = 32
Este código define algunas variables que son comunes al entrenar modelos de aprendizaje profundo para clasificación de imágenes:
width_shapeyheight_shape: Estas variables representan el ancho y la altura de las imágenes de entrada al modelo. En este caso, las imágenes se están redimensionando a una forma cuadrada de 224x224 píxeles. Es común redimensionar las imágenes de entrada a un tamaño específico antes de alimentarlas al modelo.num_classes: Esta variable indica el número de clases en el problema de clasificación. En este caso, el modelo se entrenará para clasificar imágenes en 10 clases diferentes.epochs: Este parámetro especifica el número de épocas o iteraciones completas sobre el conjunto de datos durante el entrenamiento del modelo. Cada época implica pasar por todo el conjunto de datos una vez hacia adelante y hacia atrás a través de la red neuronal.batch_size: Este parámetro indica el tamaño del lote de datos que se utilizará en cada iteración de entrenamiento. Durante el entrenamiento, los datos se dividen en lotes y se procesan en paralelo, lo que puede acelerar el proceso de entrenamiento y hacerlo más eficiente en términos de uso de memoria. Un valor típico parabatch_sizees 32, lo que significa que se procesarán 32 imágenes a la vez durante cada iteración de entrenamiento.
Path de dataset¶
Estos son los directorios donde se almacenan los datos de entrenamiento y validación respectivamente para el modelo. Es una práctica común dividir el conjunto de datos en un conjunto de entrenamiento y un conjunto de validación para evaluar el rendimiento del modelo durante el entrenamiento.
train_data_dir = 'dataset/train'
validation_data_dir = 'dataset/valid'
train_data_dir: Este directorio contiene las imágenes de entrenamiento. Por lo general, se espera que contenga subdirectorios separados para cada clase (cada tipo de ave) de imagen, donde las imágenes de cada clase se agrupan juntas.validation_data_dir: Este directorio contiene las imágenes de validación, que se utilizan para evaluar el rendimiento del modelo en un conjunto de datos independiente durante el entrenamiento. Al igual que con el directorio de entrenamiento, se espera que este directorio contenga subdirectorios separados para cada clase de imagen.
La estructura de directorios esperada por los generadores de datos de Keras o TensorFlow para tareas de clasificación de imágenes suele ser la siguiente:
dataset/
├── train/
│ ├── class1/
│ │ ├── image1.jpg
│ │ ├── image2.jpg
│ │ └── ...
│ ├── class2/
│ │ ├── image1.jpg
│ │ ├── image2.jpg
│ │ └── ...
│ └── ...
└── valid/
├── class1/
│ ├── image1.jpg
│ ├── image2.jpg
│ └── ...
├── class2/
│ ├── image1.jpg
│ ├── image2.jpg
│ └── ...
└── ...
En esta estructura:
- El directorio
traincontiene subdirectorios separados para cada clase de imagen. - Cada subdirectorio de clase (
class1,class2, etc.) contiene las imágenes correspondientes a esa clase. - El directorio
validtiene una estructura similar al directorio de entrenamiento y se utiliza para almacenar las imágenes de validación.
Es importante seguir esta estructura de directorios para que los generadores de datos de Keras o TensorFlow puedan encontrar y cargar correctamente las imágenes durante el entrenamiento y la validación del modelo. Si los datos no están organizados de esta manera, los generadores de datos pueden tener dificultades para encontrar las imágenes y el proceso de entrenamiento puede fallar o producir resultados incorrectos.
Generador de imágenes para las imagenes de train y valid¶
Los generadores de imágenes son fundamentales para cargar y procesar los datos de entrenamiento y validación de manera eficiente durante el entrenamiento del modelo de clasificación de imágenes. Además, aplican aumentos de datos en tiempo real durante el entrenamiento, lo que ayuda a mejorar la capacidad del modelo para generalizar a nuevos datos.
# Definir el generador de imágenes para el conjunto de entrenamiento con aumentos de datos
train_datagen = ImageDataGenerator(
rotation_range=20, # Rango de grados para rotación aleatoria
zoom_range=0.2, # Rango de zoom aleatorio
width_shift_range=0.1, # Rango de desplazamiento horizontal aleatorio
height_shift_range=0.1, # Rango de desplazamiento vertical aleatorio
horizontal_flip=True, # Volteo horizontal aleatorio
vertical_flip=False, # No se aplica volteo vertical
preprocessing_function=preprocess_input) # Función de preprocesamiento
# Definir el generador de imágenes para el conjunto de validación con los mismos aumentos de datos
valid_datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
preprocessing_function=preprocess_input)
# Crear un generador de lotes de imágenes para el conjunto de entrenamiento
train_generator = train_datagen.flow_from_directory(
train_data_dir, # Directorio que contiene las imágenes de entrenamiento
target_size=(width_shape, height_shape), # Tamaño al que se redimensionarán las imágenes
batch_size=batch_size, # Tamaño del lote
class_mode='categorical') # Modo de clasificación para imágenes categóricas
# Crear un generador de lotes de imágenes para el conjunto de validación
validation_generator = valid_datagen.flow_from_directory(
validation_data_dir, # Directorio que contiene las imágenes de validación
target_size=(width_shape, height_shape), # Tamaño al que se redimensionarán las imágenes
batch_size=batch_size, # Tamaño del lote
class_mode='categorical') # Modo de clasificación para imágenes categóricas
Found 834 images belonging to 5 classes. Found 25 images belonging to 5 classes.
Entrenamiento de modelo VGG16¶
A continuación, se configura y entrena un modelo de red neuronal convolucional (CNN) utilizando la arquitectura VGG16 preentrenada para la clasificación de imágenes. Primero, se establecen variables para el número de muestras de entrenamiento y validación. Luego, se define la estructura de entrada de la red neuronal y se carga el modelo VGG16 preentrenado con sus pesos ajustados a partir del conjunto de datos ImageNet. Se añade una capa densa adicional para adaptar la salida a la cantidad de clases en el problema de clasificación y se congela el resto de las capas para evitar que se modifiquen durante el entrenamiento. Posteriormente, el modelo se compila con una función de pérdida y un optimizador específicos, y se muestra un resumen de la arquitectura del modelo. Finalmente, se lleva a cabo el entrenamiento del modelo utilizando generadores de datos para el conjunto de entrenamiento y validación, con la configuración de épocas y pasos definida previamente. Durante el entrenamiento, se actualizan únicamente los pesos de la capa densa añadida, mientras que los pesos de las capas preentrenadas permanecen constantes debido a la congelación previa.
Prueba 1¶
Regularización L2¶
Se agrega regularización L2 a la capa densa utilizando el parámetro kernel_regularizer='l2'. La regularización L2 es una técnica comúnmente utilizada para prevenir el sobreajuste en modelos de aprendizaje automático al penalizar los pesos grandes en la función de pérdida. Esto puede ayudar a mejorar la generalización del modelo, especialmente cuando se tienen conjuntos de datos pequeños o complejos.
Entrenamiento¶
# Definir el número de muestras de entrenamiento y validación
nb_train_samples = 2805
nb_validation_samples = 90
# Definir la entrada de la red neuronal con el tamaño de las imágenes
image_input = Input(shape=(width_shape, height_shape, 3))
# Cargar el modelo VGG16 preentrenado con pesos ajustados desde ImageNet
model = VGG16(input_tensor=image_input, include_top=True, weights='imagenet')
# Obtener la salida de la penúltima capa densa del modelo VGG16 (fc2)
last_layer = model.get_layer('fc2').output
# Añadir una nueva capa densa al final del modelo para la clasificación multiclase con regularización L2 (Evita sobreajuste)
out = Dense(num_classes, activation='softmax', kernel_regularizer='l2', name='output')(last_layer)
# Crear un nuevo modelo personalizado que toma la entrada de la imagen y produce la salida clasificada
custom_vgg_model = Model(image_input, out)
# Congelar todas las capas del modelo, excepto la capa densa añadida
for layer in custom_vgg_model.layers[:-1]:
layer.trainable = False
# Compilar el modelo con una función de pérdida, optimizador y métricas especificadas
custom_vgg_model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=0.0001), metrics=['accuracy'])
# Mostrar un resumen del modelo que incluye la arquitectura y el número de parámetros
custom_vgg_model.summary()
# Entrenar el modelo utilizando generadores de datos para el conjunto de entrenamiento y validación
model_history = custom_vgg_model.fit(
train_generator,
epochs=epochs,
validation_data=validation_generator,
steps_per_epoch=nb_train_samples//batch_size, # Número de pasos por época de entrenamiento
validation_steps=nb_validation_samples//batch_size) # Número de pasos por época de validación
Model: "functional_1"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ output (Dense) │ (None, 18) │ 73,746 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 134,334,290 (512.44 MB)
Trainable params: 73,746 (288.07 KB)
Non-trainable params: 134,260,544 (512.16 MB)
Epoch 1/50
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\keras\src\trainers\data_adapters\py_dataset_adapter.py:120: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored. self._warn_if_super_not_called()
87/87 ━━━━━━━━━━━━━━━━━━━━ 327s 4s/step - accuracy: 0.2574 - loss: 3.3109 - val_accuracy: 0.6719 - val_loss: 1.3040 Epoch 2/50 1/87 ━━━━━━━━━━━━━━━━━━━━ 4:41 3s/step - accuracy: 0.7500 - loss: 1.1916
C:\Users\Oscar Diaz\anaconda3\Lib\contextlib.py:158: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset. self.gen.throw(typ, value, traceback)
87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 34ms/step - accuracy: 0.7500 - loss: 1.1916 - val_accuracy: 0.8077 - val_loss: 1.1709 Epoch 3/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 315s 4s/step - accuracy: 0.8064 - loss: 1.0603 - val_accuracy: 0.9219 - val_loss: 0.7147 Epoch 4/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 32ms/step - accuracy: 0.7812 - loss: 0.9347 - val_accuracy: 0.8462 - val_loss: 0.8326 Epoch 5/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 315s 4s/step - accuracy: 0.8887 - loss: 0.7424 - val_accuracy: 0.8750 - val_loss: 0.7001 Epoch 6/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 33ms/step - accuracy: 0.9688 - loss: 0.6136 - val_accuracy: 0.7692 - val_loss: 0.8264 Epoch 7/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 311s 4s/step - accuracy: 0.9222 - loss: 0.6504 - val_accuracy: 0.9844 - val_loss: 0.5047 Epoch 8/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 33ms/step - accuracy: 0.8750 - loss: 0.7784 - val_accuracy: 0.9231 - val_loss: 0.6585 Epoch 9/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 306s 3s/step - accuracy: 0.9399 - loss: 0.5582 - val_accuracy: 0.9219 - val_loss: 0.5437 Epoch 10/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.9375 - loss: 0.5939 - val_accuracy: 0.9615 - val_loss: 0.5021 Epoch 11/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 298s 3s/step - accuracy: 0.9530 - loss: 0.5215 - val_accuracy: 0.9062 - val_loss: 0.5525 Epoch 12/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.8750 - loss: 0.5664 - val_accuracy: 1.0000 - val_loss: 0.4143 Epoch 13/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 299s 3s/step - accuracy: 0.9540 - loss: 0.4771 - val_accuracy: 0.9688 - val_loss: 0.4520 Epoch 14/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 1.0000 - loss: 0.3820 - val_accuracy: 0.8846 - val_loss: 0.5025 Epoch 15/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 303s 3s/step - accuracy: 0.9615 - loss: 0.4615 - val_accuracy: 0.9531 - val_loss: 0.4085 Epoch 16/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9375 - loss: 0.4576 - val_accuracy: 0.8846 - val_loss: 0.4775 Epoch 17/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 307s 4s/step - accuracy: 0.9693 - loss: 0.4188 - val_accuracy: 0.9375 - val_loss: 0.4464 Epoch 18/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 32ms/step - accuracy: 0.9688 - loss: 0.4436 - val_accuracy: 1.0000 - val_loss: 0.4011 Epoch 19/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.9695 - loss: 0.4057 - val_accuracy: 0.9062 - val_loss: 0.4726 Epoch 20/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 13s 121ms/step - accuracy: 0.9688 - loss: 0.4776 - val_accuracy: 0.9231 - val_loss: 0.4076 Epoch 21/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 304s 3s/step - accuracy: 0.9769 - loss: 0.3775 - val_accuracy: 0.9531 - val_loss: 0.3875 Epoch 22/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9062 - loss: 0.4468 - val_accuracy: 0.9615 - val_loss: 0.3648 Epoch 23/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 307s 4s/step - accuracy: 0.9741 - loss: 0.3705 - val_accuracy: 0.9844 - val_loss: 0.3266 Epoch 24/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.9375 - loss: 0.4013 - val_accuracy: 0.9231 - val_loss: 0.4140 Epoch 25/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 298s 3s/step - accuracy: 0.9795 - loss: 0.3441 - val_accuracy: 0.9688 - val_loss: 0.3973 Epoch 26/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 1.0000 - loss: 0.2807 - val_accuracy: 1.0000 - val_loss: 0.2925 Epoch 27/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 296s 3s/step - accuracy: 0.9809 - loss: 0.3312 - val_accuracy: 0.9688 - val_loss: 0.3330 Epoch 28/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 36ms/step - accuracy: 1.0000 - loss: 0.2897 - val_accuracy: 0.9231 - val_loss: 0.4058 Epoch 29/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 298s 3s/step - accuracy: 0.9791 - loss: 0.3303 - val_accuracy: 0.9531 - val_loss: 0.3856 Epoch 30/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 1.0000 - loss: 0.3215 - val_accuracy: 0.9615 - val_loss: 0.3279 Epoch 31/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 297s 3s/step - accuracy: 0.9853 - loss: 0.3096 - val_accuracy: 0.9844 - val_loss: 0.3010 Epoch 32/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9688 - loss: 0.3030 - val_accuracy: 0.9615 - val_loss: 0.4724 Epoch 33/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 292s 3s/step - accuracy: 0.9826 - loss: 0.3023 - val_accuracy: 0.9062 - val_loss: 0.3939 Epoch 34/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9688 - loss: 0.3192 - val_accuracy: 1.0000 - val_loss: 0.2482 Epoch 35/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 292s 3s/step - accuracy: 0.9876 - loss: 0.2885 - val_accuracy: 0.9844 - val_loss: 0.2896 Epoch 36/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9688 - loss: 0.2885 - val_accuracy: 1.0000 - val_loss: 0.2804 Epoch 37/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 299s 3s/step - accuracy: 0.9845 - loss: 0.2804 - val_accuracy: 0.9688 - val_loss: 0.2981 Epoch 38/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 1.0000 - loss: 0.2669 - val_accuracy: 0.9615 - val_loss: 0.2888 Epoch 39/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.9828 - loss: 0.2720 - val_accuracy: 0.9844 - val_loss: 0.2701 Epoch 40/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9688 - loss: 0.2661 - val_accuracy: 1.0000 - val_loss: 0.2824 Epoch 41/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 293s 3s/step - accuracy: 0.9883 - loss: 0.2564 - val_accuracy: 0.9844 - val_loss: 0.2613 Epoch 42/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 29ms/step - accuracy: 1.0000 - loss: 0.2392 - val_accuracy: 0.9231 - val_loss: 0.4176 Epoch 43/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.9897 - loss: 0.2477 - val_accuracy: 0.9844 - val_loss: 0.2354 Epoch 44/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 1.0000 - loss: 0.2093 - val_accuracy: 0.9615 - val_loss: 0.3279 Epoch 45/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 291s 3s/step - accuracy: 0.9914 - loss: 0.2405 - val_accuracy: 0.9531 - val_loss: 0.2933 Epoch 46/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 29ms/step - accuracy: 1.0000 - loss: 0.2711 - val_accuracy: 1.0000 - val_loss: 0.1968 Epoch 47/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 291s 3s/step - accuracy: 0.9890 - loss: 0.2319 - val_accuracy: 0.9688 - val_loss: 0.2634 Epoch 48/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 1.0000 - loss: 0.1956 - val_accuracy: 1.0000 - val_loss: 0.2144 Epoch 49/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.9883 - loss: 0.2314 - val_accuracy: 0.9375 - val_loss: 0.3072 Epoch 50/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 1.0000 - loss: 0.1996 - val_accuracy: 1.0000 - val_loss: 0.1915
Grabar modelo en disco¶
import os
# Nombre base del modelo
model_name = "model_VGG16_v"
# Extensión del archivo
file_extension = ".keras"
# Directorio donde se guardarán los modelos
model_directory = "models/"
# Inicializar contador
counter = 1
# Generar el nombre completo del archivo
file_name = model_name + file_extension
ruta=model_directory + file_name
print(ruta)
# Verificar si el modelo ya está guardado
while os.path.exists(model_directory + file_name):
# Si el archivo existe, agregar un número al final del nombre del modelo
file_name = f"{model_name}{counter}{file_extension}"
counter += 1
# Guardar el modelo con el nombre único en el directorio correcto
custom_vgg_model.save(model_directory + file_name)
models/model_VGG16_v.keras
Gráficas de entrenamiento y validación (accuracy - loss)¶
Esta función plotTraining se encarga de visualizar la evolución de la pérdida (loss) y la precisión (accuracy) durante el entrenamiento y la validación de un modelo de redes neuronales a lo largo de las épocas.
def plotTraining(hist, epochs, typeData):
# Seleccionar la figura y establecer el tamaño
# Dependiendo del tipo de datos (loss o accuracy), se elige la figura correspondiente
if typeData=="loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'-r',label='Loss Training')
if typeData=="accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'-r',label='Accuracy Training')
if typeData=="val_loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['val_loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'--b',label='Loss Validate')
if typeData=="val_accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['val_accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'--b',label='Training Validate')
plt.rc('xtick',labelsize=24)
plt.rc('ytick',labelsize=24)
plt.rc('legend', fontsize=18)
plt.legend()
plt.xlabel('Number of Epochs',fontsize=24)
plt.grid(True)
plotTraining(model_history,epochs,"loss")
plotTraining(model_history,epochs,"accuracy")
plotTraining(model_history,epochs,"val_loss")
plotTraining(model_history,epochs,"val_accuracy")
Predicción usando el modelo entrenado¶
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
from keras.models import load_model
names = ['CYANOCORAX YNCAS', 'PIRANGA RUBRA', 'PITANGUS SULPHURATUS', 'PYROCEPHALUS RUBINUS', 'RUPORNIS MAGNIROSTRIS', 'SICALIS FLAVEOLA', 'THRAUPIS EPISCOPUS', 'TIARIS OLIVACEUS', 'TYRANNUS MELANCHOLICUS', 'ZONOTRICHIA CAPENSIS', 'CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS', 'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS', 'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA', 'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
# Cargar el modelo
modelt = load_model(model_directory + file_name)
#modelt = custom_vgg_model
# Ruta de la imagen de prueba
imaget_path = "ImagenPrueba.jpg"
# Leer la imagen, cambiar tamaño y preprocesar
imaget=cv2.resize(cv2.imread(imaget_path), (width_shape, height_shape), interpolation = cv2.INTER_AREA)
xt = np.asarray(imaget)
xt=preprocess_input(xt)
xt = np.expand_dims(xt,axis=0)
# Obtener las predicciones del modelo
preds = modelt.predict(xt)
# Obtener la clase predicha y su porcentaje de confianza
predicted_class_index = np.argmax(preds)
predicted_class_name = names[predicted_class_index]
confidence_percentage = preds[0][predicted_class_index] * 100
# Imprimir el resultado
print(f'Clase predicha: {predicted_class_name}')
print(f'Porcentaje de confianza: {confidence_percentage:.2f}%')
# Mostrar la imagen
plt.imshow(cv2.cvtColor(np.asarray(imaget), cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 785ms/step Clase predicha: EGRETTA THULA Porcentaje de confianza: 66.31%
Matriz de confusión y métricas de desempeño¶
from sklearn.metrics import confusion_matrix, f1_score, roc_curve, precision_score, recall_score, accuracy_score, roc_auc_score
from sklearn import metrics
from mlxtend.plotting import plot_confusion_matrix
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
names = ['CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS', 'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS', 'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA', 'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
test_data_dir = 'dataset/test'
test_datagen = ImageDataGenerator()
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(width_shape, height_shape),
batch_size = batch_size,
class_mode='categorical',
shuffle=False)
custom_Model= load_model(model_directory + file_name)
predictions = custom_Model.predict(test_generator)
y_pred = np.argmax(predictions, axis=1)
y_real = test_generator.classes
matc=confusion_matrix(y_real, y_pred)
print(metrics.classification_report(y_real,y_pred, digits = 4))
Found 85 images belonging to 18 classes.
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\keras\src\trainers\data_adapters\py_dataset_adapter.py:120: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored. self._warn_if_super_not_called()
3/3 ━━━━━━━━━━━━━━━━━━━━ 9s 3s/step precision recall f1-score support 0 1.0000 0.8000 0.8889 5 1 0.6667 0.8000 0.7273 5 2 0.8333 1.0000 0.9091 5 3 1.0000 0.8000 0.8889 5 4 0.5556 1.0000 0.7143 5 5 0.8333 1.0000 0.9091 5 6 0.5000 1.0000 0.6667 5 7 0.5556 1.0000 0.7143 5 8 0.8000 0.8000 0.8000 5 9 0.3333 0.8000 0.4706 5 10 1.0000 0.4000 0.5714 5 11 1.0000 0.6000 0.7500 5 12 0.0000 0.0000 0.0000 5 13 0.0000 0.0000 0.0000 5 14 1.0000 0.6000 0.7500 5 15 1.0000 0.4000 0.5714 5 16 0.0000 0.0000 0.0000 5 17 0.0000 0.0000 0.0000 0 accuracy 0.6471 85 macro avg 0.6154 0.6111 0.5740 85 weighted avg 0.6516 0.6471 0.6078 85
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result))
# Define el tamaño de la figura y los nombres de las clases
fig, ax = plot_confusion_matrix(conf_mat=matc, figsize=(18,18), class_names=names, show_normed=True)
# Ajusta el tamaño de las letras de los nombres de las clases
ax.set_xticklabels(names, fontsize=12) # Ajusta el tamaño de las letras en el eje x
ax.set_yticklabels(names, fontsize=12) # Ajusta el tamaño de las letras en el eje y
# Ajusta automáticamente el diseño de la figura
plt.tight_layout()
# Muestra la figura
plt.show()
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\mlxtend\plotting\plot_confusion_matrix.py:102: RuntimeWarning: invalid value encountered in divide
normed_conf_mat = conf_mat.astype("float") / total_samples
Prueba 2¶
Generador de imágenes para las imagenes de train y valid¶
# Definir el generador de imágenes para el conjunto de entrenamiento con aumentos de datos
train_datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
preprocessing_function=preprocess_input
)
# Definir el generador de imágenes para el conjunto de validación con los mismos aumentos de datos
valid_datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
preprocessing_function=preprocess_input
)
# Definir el número de conjuntos de entrenamiento y validación
num_splits = 3
# Dividir los datos en múltiples conjuntos de entrenamiento y validación
for i in range(num_splits):
# Crear un generador de lotes de imágenes para el conjunto de entrenamiento
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(width_shape, height_shape),
batch_size=batch_size,
class_mode='categorical'
)
# Crear un generador de lotes de imágenes para el conjunto de validación
validation_generator = valid_datagen.flow_from_directory(
validation_data_dir,
target_size=(width_shape, height_shape),
batch_size=batch_size,
class_mode='categorical'
)
Found 2805 images belonging to 18 classes. Found 90 images belonging to 18 classes. Found 2805 images belonging to 18 classes. Found 90 images belonging to 18 classes. Found 2805 images belonging to 18 classes. Found 90 images belonging to 18 classes.
Entrenamiento¶
from keras.callbacks import LearningRateScheduler
import math
# Definir el número de muestras de entrenamiento y validación
nb_train_samples = 2805
nb_validation_samples = 90
# Definir la entrada de la red neuronal con el tamaño de las imágenes
image_input = Input(shape=(width_shape, height_shape, 3))
# Cargar el modelo VGG16 preentrenado con pesos ajustados desde ImageNet
model = VGG16(input_tensor=image_input, include_top=True, weights='imagenet')
# Obtener la salida de la penúltima capa densa del modelo VGG16 (fc2)
last_layer = model.get_layer('fc2').output
# Añadir una nueva capa densa al final del modelo para la clasificación multiclase con regularización L2 (Evita sobreajuste)
out = Dense(num_classes, activation='softmax', kernel_regularizer='l2', name='output')(last_layer)
# Crear un nuevo modelo personalizado que toma la entrada de la imagen y produce la salida clasificada
custom_vgg_model = Model(image_input, out)
# Congelar todas las capas del modelo, excepto la capa densa añadida
for layer in custom_vgg_model.layers[:-1]:
layer.trainable = False
# Compilar el modelo con una función de pérdida, optimizador y métricas especificadas
custom_vgg_model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=0.0001), metrics=['accuracy'])
# Definir una función que reduzca la tasa de aprendizaje a la mitad cada 2 épocas
def lr_schedule(epoch):
initial_lr = 0.0001
drop = 0.5
epochs_drop = 2
lr = initial_lr * math.pow(drop, math.floor((1+epoch)/epochs_drop))
print("Learning rate set to:", lr)
return lr
# Crear un callback para ajustar dinámicamente la tasa de aprendizaje
lr_scheduler = LearningRateScheduler(lr_schedule)
# Mostrar un resumen del modelo que incluye la arquitectura y el número de parámetros
custom_vgg_model.summary()
# Entrenar el modelo utilizando generadores de datos para el conjunto de entrenamiento y validación
model_history = custom_vgg_model.fit(
train_generator,
epochs=epochs,
validation_data=validation_generator,
steps_per_epoch=nb_train_samples//batch_size, # Número de pasos por época de entrenamiento
validation_steps=nb_validation_samples//batch_size, # Número de pasos por época de validación
callbacks=[lr_scheduler])# Añadir el callback de la tasa de aprendizaje
Model: "functional_3"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_1 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ output (Dense) │ (None, 18) │ 73,746 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 134,334,290 (512.44 MB)
Trainable params: 73,746 (288.07 KB)
Non-trainable params: 134,260,544 (512.16 MB)
Learning rate set to: 0.0001 Epoch 1/50
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\keras\src\trainers\data_adapters\py_dataset_adapter.py:120: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored. self._warn_if_super_not_called()
87/87 ━━━━━━━━━━━━━━━━━━━━ 320s 4s/step - accuracy: 0.2627 - loss: 2.9901 - val_accuracy: 0.8281 - val_loss: 0.9816 - learning_rate: 1.0000e-04 Learning rate set to: 5e-05 Epoch 2/50 1/87 ━━━━━━━━━━━━━━━━━━━━ 5:18 4s/step - accuracy: 0.6875 - loss: 1.3955
C:\Users\Oscar Diaz\anaconda3\Lib\contextlib.py:158: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset. self.gen.throw(typ, value, traceback)
87/87 ━━━━━━━━━━━━━━━━━━━━ 7s 39ms/step - accuracy: 0.6875 - loss: 1.3955 - val_accuracy: 0.6154 - val_loss: 1.3046 - learning_rate: 5.0000e-05 Learning rate set to: 5e-05 Epoch 3/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 314s 4s/step - accuracy: 0.7854 - loss: 1.0984 - val_accuracy: 0.8594 - val_loss: 0.8841 - learning_rate: 5.0000e-05 Learning rate set to: 2.5e-05 Epoch 4/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.7188 - loss: 0.9660 - val_accuracy: 0.9231 - val_loss: 0.7396 - learning_rate: 2.5000e-05 Learning rate set to: 2.5e-05 Epoch 5/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 306s 3s/step - accuracy: 0.8557 - loss: 0.8725 - val_accuracy: 0.8438 - val_loss: 0.8213 - learning_rate: 2.5000e-05 Learning rate set to: 1.25e-05 Epoch 6/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.9062 - loss: 0.6931 - val_accuracy: 0.8846 - val_loss: 0.7597 - learning_rate: 1.2500e-05 Learning rate set to: 1.25e-05 Epoch 7/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 297s 3s/step - accuracy: 0.8782 - loss: 0.8099 - val_accuracy: 0.9219 - val_loss: 0.6788 - learning_rate: 1.2500e-05 Learning rate set to: 6.25e-06 Epoch 8/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.8125 - loss: 0.8384 - val_accuracy: 0.8077 - val_loss: 0.8301 - learning_rate: 6.2500e-06 Learning rate set to: 6.25e-06 Epoch 9/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 297s 3s/step - accuracy: 0.8746 - loss: 0.7692 - val_accuracy: 0.8438 - val_loss: 0.7895 - learning_rate: 6.2500e-06 Learning rate set to: 3.125e-06 Epoch 10/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9375 - loss: 0.5706 - val_accuracy: 0.9231 - val_loss: 0.7769 - learning_rate: 3.1250e-06 Learning rate set to: 3.125e-06 Epoch 11/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 296s 3s/step - accuracy: 0.8766 - loss: 0.7774 - val_accuracy: 0.9062 - val_loss: 0.6919 - learning_rate: 3.1250e-06 Learning rate set to: 1.5625e-06 Epoch 12/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 32ms/step - accuracy: 0.8750 - loss: 0.7578 - val_accuracy: 0.9231 - val_loss: 0.7050 - learning_rate: 1.5625e-06 Learning rate set to: 1.5625e-06 Epoch 13/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 296s 3s/step - accuracy: 0.8920 - loss: 0.7599 - val_accuracy: 0.8594 - val_loss: 0.7321 - learning_rate: 1.5625e-06 Learning rate set to: 7.8125e-07 Epoch 14/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.8750 - loss: 0.8360 - val_accuracy: 0.8462 - val_loss: 0.9521 - learning_rate: 7.8125e-07 Learning rate set to: 7.8125e-07 Epoch 15/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 306s 3s/step - accuracy: 0.8925 - loss: 0.7552 - val_accuracy: 0.9375 - val_loss: 0.6446 - learning_rate: 7.8125e-07 Learning rate set to: 3.90625e-07 Epoch 16/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 29ms/step - accuracy: 0.9062 - loss: 0.6771 - val_accuracy: 0.8846 - val_loss: 0.7051 - learning_rate: 3.9062e-07 Learning rate set to: 3.90625e-07 Epoch 17/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 297s 3s/step - accuracy: 0.8914 - loss: 0.7439 - val_accuracy: 0.9062 - val_loss: 0.7160 - learning_rate: 3.9062e-07 Learning rate set to: 1.953125e-07 Epoch 18/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9062 - loss: 0.7391 - val_accuracy: 0.9231 - val_loss: 0.7109 - learning_rate: 1.9531e-07 Learning rate set to: 1.953125e-07 Epoch 19/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 296s 3s/step - accuracy: 0.8956 - loss: 0.7447 - val_accuracy: 0.9375 - val_loss: 0.7051 - learning_rate: 1.9531e-07 Learning rate set to: 9.765625e-08 Epoch 20/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.8750 - loss: 0.7531 - val_accuracy: 0.7308 - val_loss: 0.7895 - learning_rate: 9.7656e-08 Learning rate set to: 9.765625e-08 Epoch 21/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 302s 3s/step - accuracy: 0.8937 - loss: 0.7510 - val_accuracy: 0.9375 - val_loss: 0.6711 - learning_rate: 9.7656e-08 Learning rate set to: 4.8828125e-08 Epoch 22/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.9688 - loss: 0.6780 - val_accuracy: 0.9615 - val_loss: 0.6467 - learning_rate: 4.8828e-08 Learning rate set to: 4.8828125e-08 Epoch 23/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 297s 3s/step - accuracy: 0.8882 - loss: 0.7567 - val_accuracy: 0.8750 - val_loss: 0.6856 - learning_rate: 4.8828e-08 Learning rate set to: 2.44140625e-08 Epoch 24/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.7500 - loss: 0.9866 - val_accuracy: 0.8462 - val_loss: 0.8062 - learning_rate: 2.4414e-08 Learning rate set to: 2.44140625e-08 Epoch 25/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.8869 - loss: 0.7534 - val_accuracy: 0.9219 - val_loss: 0.6431 - learning_rate: 2.4414e-08 Learning rate set to: 1.220703125e-08 Epoch 26/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9375 - loss: 0.6891 - val_accuracy: 0.8462 - val_loss: 0.7317 - learning_rate: 1.2207e-08 Learning rate set to: 1.220703125e-08 Epoch 27/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 296s 3s/step - accuracy: 0.8814 - loss: 0.7565 - val_accuracy: 0.9062 - val_loss: 0.7840 - learning_rate: 1.2207e-08 Learning rate set to: 6.103515625e-09 Epoch 28/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 7s 32ms/step - accuracy: 0.9375 - loss: 0.6982 - val_accuracy: 0.8846 - val_loss: 0.6782 - learning_rate: 6.1035e-09 Learning rate set to: 6.103515625e-09 Epoch 29/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.8971 - loss: 0.7462 - val_accuracy: 0.9375 - val_loss: 0.6778 - learning_rate: 6.1035e-09 Learning rate set to: 3.0517578125e-09 Epoch 30/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 7s 40ms/step - accuracy: 0.8750 - loss: 0.8374 - val_accuracy: 0.9615 - val_loss: 0.6405 - learning_rate: 3.0518e-09 Learning rate set to: 3.0517578125e-09 Epoch 31/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 298s 3s/step - accuracy: 0.9021 - loss: 0.7392 - val_accuracy: 0.9219 - val_loss: 0.7032 - learning_rate: 3.0518e-09 Learning rate set to: 1.52587890625e-09 Epoch 32/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.7812 - loss: 0.7991 - val_accuracy: 0.9615 - val_loss: 0.6517 - learning_rate: 1.5259e-09 Learning rate set to: 1.52587890625e-09 Epoch 33/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 297s 3s/step - accuracy: 0.8967 - loss: 0.7477 - val_accuracy: 0.8906 - val_loss: 0.7329 - learning_rate: 1.5259e-09 Learning rate set to: 7.62939453125e-10 Epoch 34/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 7s 38ms/step - accuracy: 0.9688 - loss: 0.6851 - val_accuracy: 0.8846 - val_loss: 0.8029 - learning_rate: 7.6294e-10 Learning rate set to: 7.62939453125e-10 Epoch 35/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 301s 3s/step - accuracy: 0.8865 - loss: 0.7543 - val_accuracy: 0.8750 - val_loss: 0.7026 - learning_rate: 7.6294e-10 Learning rate set to: 3.814697265625e-10 Epoch 36/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.8438 - loss: 0.7618 - val_accuracy: 1.0000 - val_loss: 0.5661 - learning_rate: 3.8147e-10 Learning rate set to: 3.814697265625e-10 Epoch 37/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 297s 3s/step - accuracy: 0.8912 - loss: 0.7499 - val_accuracy: 0.9219 - val_loss: 0.7116 - learning_rate: 3.8147e-10 Learning rate set to: 1.9073486328125e-10 Epoch 38/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 32ms/step - accuracy: 0.8438 - loss: 0.7884 - val_accuracy: 0.9615 - val_loss: 0.6267 - learning_rate: 1.9073e-10 Learning rate set to: 1.9073486328125e-10 Epoch 39/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 300s 3s/step - accuracy: 0.8936 - loss: 0.7434 - val_accuracy: 0.8594 - val_loss: 0.6922 - learning_rate: 1.9073e-10 Learning rate set to: 9.5367431640625e-11 Epoch 40/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.8750 - loss: 0.7543 - val_accuracy: 0.9231 - val_loss: 0.7359 - learning_rate: 9.5367e-11 Learning rate set to: 9.5367431640625e-11 Epoch 41/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 297s 3s/step - accuracy: 0.9073 - loss: 0.7348 - val_accuracy: 0.8594 - val_loss: 0.7475 - learning_rate: 9.5367e-11 Learning rate set to: 4.76837158203125e-11 Epoch 42/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9062 - loss: 0.7780 - val_accuracy: 0.9231 - val_loss: 0.6189 - learning_rate: 4.7684e-11 Learning rate set to: 4.76837158203125e-11 Epoch 43/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 296s 3s/step - accuracy: 0.8979 - loss: 0.7500 - val_accuracy: 0.9531 - val_loss: 0.6084 - learning_rate: 4.7684e-11 Learning rate set to: 2.384185791015625e-11 Epoch 44/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.9062 - loss: 0.7015 - val_accuracy: 0.8077 - val_loss: 0.9279 - learning_rate: 2.3842e-11 Learning rate set to: 2.384185791015625e-11 Epoch 45/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 296s 3s/step - accuracy: 0.9023 - loss: 0.7475 - val_accuracy: 0.9062 - val_loss: 0.7125 - learning_rate: 2.3842e-11 Learning rate set to: 1.1920928955078126e-11 Epoch 46/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.8125 - loss: 0.9087 - val_accuracy: 0.8846 - val_loss: 0.8064 - learning_rate: 1.1921e-11 Learning rate set to: 1.1920928955078126e-11 Epoch 47/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.8690 - loss: 0.8052 - val_accuracy: 0.8750 - val_loss: 0.7635 - learning_rate: 1.1921e-11 Learning rate set to: 5.960464477539063e-12 Epoch 48/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9688 - loss: 0.7053 - val_accuracy: 0.9615 - val_loss: 0.5546 - learning_rate: 5.9605e-12 Learning rate set to: 5.960464477539063e-12 Epoch 49/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 302s 3s/step - accuracy: 0.8988 - loss: 0.7302 - val_accuracy: 0.9531 - val_loss: 0.6477 - learning_rate: 5.9605e-12 Learning rate set to: 2.9802322387695314e-12 Epoch 50/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.9062 - loss: 0.9430 - val_accuracy: 0.9615 - val_loss: 0.6262 - learning_rate: 2.9802e-12
Este código realiza las siguientes operaciones en general:
Definición de variables: Se definen dos variables
nb_train_samplesynb_validation_samplespara indicar el número de muestras de entrenamiento y validación respectivamente. Estas variables se utilizan más adelante para configurar el entrenamiento del modelo.Definición de la entrada de la red neuronal: Se define la entrada de la red neuronal utilizando la clase
Inputde Keras. La forma de la entrada se especifica como(width_shape, height_shape, 3), lo que indica el tamaño de las imágenes de entrada y el número de canales de color (en este caso, 3 para imágenes RGB).Carga del modelo preentrenado VGG16: Se carga el modelo preentrenado VGG16 utilizando la función
VGG16de Keras. Se especifica el tensor de entrada de la red neuronal (input_tensor=image_input), se incluye la capa densa al final del modelo (include_top=True) y se cargan los pesos preentrenados en el conjunto de datos de ImageNet (weights='imagenet').Creación de un nuevo modelo personalizado: Se crea un nuevo modelo personalizado que incluye todas las capas de la VGG16 hasta la última capa densa (
fc2). Se agrega una capa densa adicional al final del modelo con un número de unidades igual al número de clases en el problema (num_classes). Esta capa tiene una función de activación softmax, lo que la convierte en una clasificación de salida.Congelación de capas: Se congela todas las capas del modelo preentrenado VGG16, excepto la última capa densa añadida. Esto se hace para que durante el entrenamiento solo se actualicen los pesos de la nueva capa densa.
Compilación del modelo: Se compila el modelo personalizado utilizando la función de pérdida
categorical_crossentropy, el optimizadoradadeltay se incluye la métrica de precisión (accuracy).Resumen del modelo: Se muestra un resumen del modelo, que incluye todas las capas y el número total de parámetros entrenables y no entrenables.
Entrenamiento del modelo: Se entrena el modelo utilizando los generadores de datos
train_generatoryvalidation_generatorque se definieron anteriormente. Se especifica el número de épocas (epochs) y el número de pasos por época (steps_per_epoch) y de validación (validation_steps). Durante el entrenamiento, se actualizan los pesos de la capa densa añadida mientras que los pesos de las capas de VGG16 se mantienen fijos debido a la congelación realizada anteriormente.
Grabar modelo en disco¶
Guardar el modelo tiene varias ventajas. En primer lugar, permite la reutilización futura, ya que puedes cargarlo y utilizarlo nuevamente sin necesidad de volver a entrenarlo desde cero, lo que resulta útil tanto para hacer predicciones en nuevos datos como para continuar el entrenamiento en una fecha posterior. Además, facilita la distribución y compartición del modelo con otros investigadores, colegas o clientes que puedan necesitar utilizarlo en sus propios proyectos. Por último, al guardar el modelo junto con su configuración y pesos entrenados, se asegura la reproducibilidad de los resultados, ya que otros investigadores pueden cargar el modelo exacto y obtener los mismos resultados que tú, lo que es fundamental para la validación y la comparación de resultados en investigación científica y desarrollo de modelos.
Este código utilizará un bucle while para verificar si el archivo con el nombre base del modelo ya existe. Si existe, agregará un número al final del nombre del archivo y verificará nuevamente. Esto continuará hasta que se encuentre un nombre de archivo único que no exista en el directorio. Una vez que se encuentra un nombre único, el modelo se guarda con ese nombre.
import os
# Nombre base del modelo
model_name = "model_VGG16_v"
# Extensión del archivo
file_extension = ".keras"
# Directorio donde se guardarán los modelos
model_directory = "models/"
# Inicializar contador
counter = 1
# Generar el nombre completo del archivo
file_name = model_name + file_extension
ruta=model_directory + file_name
print(ruta)
# Verificar si el modelo ya está guardado
while os.path.exists(model_directory + file_name):
# Si el archivo existe, agregar un número al final del nombre del modelo
file_name = f"{model_name}{counter}{file_extension}"
counter += 1
# Guardar el modelo con el nombre único en el directorio correcto
custom_vgg_model.save(model_directory + file_name)
models/model_VGG16_v.keras
Gráficas de entrenamiento y validación (accuracy - loss)¶
def plotTraining(hist, epochs, typeData):
if typeData=="loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'-r',label='Loss Training')
if typeData=="accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'-r',label='Accuracy Training')
if typeData=="val_loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['val_loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'--b',label='Loss Validate')
if typeData=="val_accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['val_accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'--b',label='Training Validate')
plt.rc('xtick',labelsize=24)
plt.rc('ytick',labelsize=24)
plt.rc('legend', fontsize=18)
plt.legend()
plt.xlabel('Number of Epochs',fontsize=24)
plt.grid(True)
plotTraining(model_history,epochs,"loss")
plotTraining(model_history,epochs,"accuracy")
plotTraining(model_history,epochs,"val_loss")
plotTraining(model_history,epochs,"val_accuracy")
Predicción usando el modelo entrenado¶
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
from keras.models import load_model
names = ['CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS', 'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS', 'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA', 'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
# Cargar el modelo
modelt = load_model(model_directory + file_name)
#modelt = custom_vgg_model
# Ruta de la imagen de prueba
imaget_path = "ImagenPrueba_SF.png"
# Leer la imagen, cambiar tamaño y preprocesar
imaget=cv2.resize(cv2.imread(imaget_path), (width_shape, height_shape), interpolation = cv2.INTER_AREA)
xt = np.asarray(imaget)
xt=preprocess_input(xt)
xt = np.expand_dims(xt,axis=0)
# Obtener las predicciones del modelo
preds = modelt.predict(xt)
# Obtener la clase predicha y su porcentaje de confianza
predicted_class_index = np.argmax(preds)
predicted_class_name = names[predicted_class_index]
confidence_percentage = preds[0][predicted_class_index] * 100
# Imprimir el resultado
print(f'Clase predicha: {predicted_class_name}')
print(f'Porcentaje de confianza: {confidence_percentage:.2f}%')
# Mostrar la imagen
plt.imshow(cv2.cvtColor(np.asarray(imaget), cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 656ms/step Clase predicha: CYANOCORAX YNCAS Porcentaje de confianza: 89.46%
Matriz de confusión y métricas de desempeño¶
from sklearn.metrics import confusion_matrix, f1_score, roc_curve, precision_score, recall_score, accuracy_score, roc_auc_score
from sklearn import metrics
from mlxtend.plotting import plot_confusion_matrix
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
names = ['CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS', 'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS', 'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA', 'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
test_data_dir = 'dataset/test'
test_datagen = ImageDataGenerator()
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(width_shape, height_shape),
batch_size = batch_size,
class_mode='categorical',
shuffle=False)
custom_Model= load_model(model_directory + file_name)
predictions = custom_Model.predict(test_generator)
y_pred = np.argmax(predictions, axis=1)
y_real = test_generator.classes
matc=confusion_matrix(y_real, y_pred)
plot_confusion_matrix(conf_mat=matc, figsize=(9,9), class_names = names, show_normed=False)
plt.tight_layout()
print(metrics.classification_report(y_real,y_pred, digits = 4))
Found 86 images belonging to 18 classes. 3/3 ━━━━━━━━━━━━━━━━━━━━ 9s 3s/step
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\mlxtend\plotting\plot_confusion_matrix.py:102: RuntimeWarning: invalid value encountered in divide
normed_conf_mat = conf_mat.astype("float") / total_samples
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
precision recall f1-score support
0 1.0000 0.2000 0.3333 5
1 0.5714 0.8000 0.6667 5
2 0.8333 1.0000 0.9091 5
3 1.0000 1.0000 1.0000 5
4 0.6250 1.0000 0.7692 5
5 0.7500 1.0000 0.8571 6
6 0.4444 0.8000 0.5714 5
7 0.4167 1.0000 0.5882 5
8 1.0000 0.8000 0.8889 5
9 0.1250 0.2000 0.1538 5
10 0.5000 0.2000 0.2857 5
11 0.2500 0.2000 0.2222 5
12 0.0000 0.0000 0.0000 5
13 1.0000 0.4000 0.5714 5
14 1.0000 0.6000 0.7500 5
15 1.0000 0.2000 0.3333 5
16 0.0000 0.0000 0.0000 5
17 0.0000 0.0000 0.0000 0
accuracy 0.5581 86
macro avg 0.5842 0.5222 0.4945 86
weighted avg 0.6201 0.5581 0.5274 86
# Define el tamaño de la figura y los nombres de las clases
fig, ax = plot_confusion_matrix(conf_mat=matc, figsize=(9,9), class_names=names, show_normed=True)
# Ajusta el tamaño de las letras de los nombres de las clases
ax.set_xticklabels(names, fontsize=12) # Ajusta el tamaño de las letras en el eje x
ax.set_yticklabels(names, fontsize=12) # Ajusta el tamaño de las letras en el eje y
# Ajusta automáticamente el diseño de la figura
plt.tight_layout()
# Muestra la figura
plt.show()
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\mlxtend\plotting\plot_confusion_matrix.py:102: RuntimeWarning: invalid value encountered in divide
normed_conf_mat = conf_mat.astype("float") / total_samples
Prueba 3¶
Regularización L2¶
Entrenamiento¶
# Importar las funciones necesarias de Keras
from keras.layers import Input
from keras.applications import VGG16
# Definir la forma de entrada para las imágenes (ancho, alto, canales RGB)
image_input = Input(shape=(width_shape, height_shape, 3))
# Crear el modelo VGG16 utilizando la entrada de imagen definida
# include_top=True significa que se incluirán todas las capas densas en la parte superior del modelo
# weights='imagenet' significa que se utilizarán los pesos pre-entrenados en ImageNet
model2 = VGG16(input_tensor=image_input, include_top=True, weights='imagenet')
# Mostrar un resumen de la arquitectura del modelo
model2.summary()
Model: "vgg16"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_2 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,357,544 (527.79 MB)
Trainable params: 138,357,544 (527.79 MB)
Non-trainable params: 0 (0.00 B)
last_layer = model2.get_layer('block5_pool').output
x= Flatten(name='flatten')(last_layer)
x = Dense(128, activation='relu', name='fc1')(x)
x = Dense(128, activation='relu', name='fc2')(x)
out = Dense(num_classes, activation='softmax', name='output')(x)
custom_model = Model(image_input, out)
custom_model.summary()
# freeze all the layers except the dense layers
for layer in custom_model.layers[:-3]:
layer.trainable = False
custom_model.summary()
custom_model.compile(loss='categorical_crossentropy',optimizer='adadelta',metrics=['accuracy'])
Model: "functional_5"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_2 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 128) │ 3,211,392 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 128) │ 16,512 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ output (Dense) │ (None, 18) │ 2,322 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 17,944,914 (68.45 MB)
Trainable params: 17,944,914 (68.45 MB)
Non-trainable params: 0 (0.00 B)
Model: "functional_5"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_2 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 128) │ 3,211,392 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 128) │ 16,512 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ output (Dense) │ (None, 18) │ 2,322 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 17,944,914 (68.45 MB)
Trainable params: 3,230,226 (12.32 MB)
Non-trainable params: 14,714,688 (56.13 MB)
# Definir el número de muestras de entrenamiento y validación
nb_train_samples = 2805
nb_validation_samples = 90
model_history = custom_model.fit(
train_generator,
epochs=epochs,
validation_data=validation_generator,
steps_per_epoch=nb_train_samples//batch_size,
validation_steps=nb_validation_samples//batch_size)
Epoch 1/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 298s 3s/step - accuracy: 0.0663 - loss: 18.7191 - val_accuracy: 0.0312 - val_loss: 18.9593 Epoch 2/50 1/87 ━━━━━━━━━━━━━━━━━━━━ 4:15 3s/step - accuracy: 0.0625 - loss: 17.2532
C:\Users\Oscar Diaz\anaconda3\Lib\contextlib.py:158: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset. self.gen.throw(typ, value, traceback)
87/87 ━━━━━━━━━━━━━━━━━━━━ 5s 29ms/step - accuracy: 0.0625 - loss: 17.2532 - val_accuracy: 0.1154 - val_loss: 12.6727 Epoch 3/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 293s 3s/step - accuracy: 0.0679 - loss: 15.7419 - val_accuracy: 0.1562 - val_loss: 13.0466 Epoch 4/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 32ms/step - accuracy: 0.1250 - loss: 16.9887 - val_accuracy: 0.0769 - val_loss: 14.3917 Epoch 5/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 307s 4s/step - accuracy: 0.0892 - loss: 13.6572 - val_accuracy: 0.0938 - val_loss: 14.8485 Epoch 6/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.0312 - loss: 12.3036 - val_accuracy: 0.1538 - val_loss: 9.1232 Epoch 7/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 296s 3s/step - accuracy: 0.1002 - loss: 12.6286 - val_accuracy: 0.1406 - val_loss: 11.9384 Epoch 8/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.0625 - loss: 12.5009 - val_accuracy: 0.1538 - val_loss: 10.8247 Epoch 9/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 301s 3s/step - accuracy: 0.1010 - loss: 11.3096 - val_accuracy: 0.0938 - val_loss: 11.0147 Epoch 10/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 29ms/step - accuracy: 0.1250 - loss: 8.9476 - val_accuracy: 0.0769 - val_loss: 11.6915 Epoch 11/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 293s 3s/step - accuracy: 0.1191 - loss: 10.0668 - val_accuracy: 0.1719 - val_loss: 10.0200 Epoch 12/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.0625 - loss: 11.1884 - val_accuracy: 0.1154 - val_loss: 10.3086 Epoch 13/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 294s 3s/step - accuracy: 0.1092 - loss: 9.7454 - val_accuracy: 0.1250 - val_loss: 9.8759 Epoch 14/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 29ms/step - accuracy: 0.1250 - loss: 9.2793 - val_accuracy: 0.0000e+00 - val_loss: 12.3645 Epoch 15/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 293s 3s/step - accuracy: 0.1582 - loss: 9.0621 - val_accuracy: 0.1875 - val_loss: 8.1522 Epoch 16/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 29ms/step - accuracy: 0.1875 - loss: 9.3314 - val_accuracy: 0.0769 - val_loss: 11.2243 Epoch 17/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 293s 3s/step - accuracy: 0.1632 - loss: 8.6203 - val_accuracy: 0.2188 - val_loss: 7.7420 Epoch 18/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.0625 - loss: 8.3507 - val_accuracy: 0.1538 - val_loss: 6.6532 Epoch 19/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 293s 3s/step - accuracy: 0.1686 - loss: 8.0066 - val_accuracy: 0.0938 - val_loss: 9.3384 Epoch 20/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 29ms/step - accuracy: 0.1562 - loss: 6.8174 - val_accuracy: 0.3077 - val_loss: 7.0244 Epoch 21/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 294s 3s/step - accuracy: 0.1580 - loss: 7.6412 - val_accuracy: 0.1250 - val_loss: 8.8477 Epoch 22/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.2812 - loss: 6.1855 - val_accuracy: 0.1923 - val_loss: 7.5021 Epoch 23/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.2020 - loss: 6.8421 - val_accuracy: 0.2031 - val_loss: 7.6823 Epoch 24/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.2812 - loss: 6.8964 - val_accuracy: 0.1154 - val_loss: 7.3148 Epoch 25/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 300s 3s/step - accuracy: 0.2051 - loss: 6.7834 - val_accuracy: 0.1406 - val_loss: 7.8871 Epoch 26/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 29ms/step - accuracy: 0.2500 - loss: 6.7696 - val_accuracy: 0.1538 - val_loss: 7.4677 Epoch 27/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.1910 - loss: 6.4696 - val_accuracy: 0.3125 - val_loss: 5.7271 Epoch 28/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.2500 - loss: 6.1857 - val_accuracy: 0.1538 - val_loss: 7.6506 Epoch 29/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 294s 3s/step - accuracy: 0.1982 - loss: 6.3297 - val_accuracy: 0.1719 - val_loss: 6.8591 Epoch 30/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.2500 - loss: 5.6456 - val_accuracy: 0.3077 - val_loss: 4.8336 Epoch 31/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 297s 3s/step - accuracy: 0.2154 - loss: 5.9391 - val_accuracy: 0.1875 - val_loss: 6.4135 Epoch 32/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.2188 - loss: 5.3871 - val_accuracy: 0.1538 - val_loss: 5.8893 Epoch 33/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 293s 3s/step - accuracy: 0.2254 - loss: 5.7190 - val_accuracy: 0.2344 - val_loss: 5.3781 Epoch 34/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.1875 - loss: 6.2125 - val_accuracy: 0.3077 - val_loss: 5.1460 Epoch 35/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 294s 3s/step - accuracy: 0.2335 - loss: 5.5416 - val_accuracy: 0.1875 - val_loss: 5.7378 Epoch 36/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 32ms/step - accuracy: 0.1875 - loss: 5.5248 - val_accuracy: 0.3846 - val_loss: 7.0313 Epoch 37/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.2353 - loss: 5.3585 - val_accuracy: 0.1875 - val_loss: 7.0348 Epoch 38/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.3125 - loss: 5.2204 - val_accuracy: 0.1538 - val_loss: 6.5614 Epoch 39/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 294s 3s/step - accuracy: 0.2596 - loss: 5.1959 - val_accuracy: 0.2656 - val_loss: 5.9645 Epoch 40/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.3750 - loss: 3.3942 - val_accuracy: 0.1923 - val_loss: 4.2173 Epoch 41/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 301s 3s/step - accuracy: 0.2552 - loss: 4.8990 - val_accuracy: 0.2656 - val_loss: 4.7025 Epoch 42/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.2188 - loss: 5.3079 - val_accuracy: 0.3462 - val_loss: 4.1265 Epoch 43/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.2820 - loss: 4.8741 - val_accuracy: 0.2812 - val_loss: 4.8116 Epoch 44/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.2812 - loss: 5.4825 - val_accuracy: 0.2308 - val_loss: 4.4658 Epoch 45/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 295s 3s/step - accuracy: 0.2944 - loss: 4.4461 - val_accuracy: 0.3438 - val_loss: 4.4990 Epoch 46/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.2188 - loss: 4.8492 - val_accuracy: 0.3462 - val_loss: 4.6156 Epoch 47/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 296s 3s/step - accuracy: 0.2920 - loss: 4.5979 - val_accuracy: 0.2969 - val_loss: 5.1433 Epoch 48/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.6250 - loss: 2.6970 - val_accuracy: 0.2308 - val_loss: 5.2890 Epoch 49/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 293s 3s/step - accuracy: 0.3138 - loss: 4.2803 - val_accuracy: 0.2500 - val_loss: 4.7989 Epoch 50/50 87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 30ms/step - accuracy: 0.2812 - loss: 5.2754 - val_accuracy: 0.4231 - val_loss: 4.1803
Grabar modelo en disco¶
import os
# Nombre base del modelo
model_name = "model_VGG16_v"
# Extensión del archivo
file_extension = ".keras"
# Directorio donde se guardarán los modelos
model_directory = "models/"
# Inicializar contador
counter = 1
# Generar el nombre completo del archivo
file_name = model_name + file_extension
ruta=model_directory + file_name
print(ruta)
# Verificar si el modelo ya está guardado
while os.path.exists(model_directory + file_name):
# Si el archivo existe, agregar un número al final del nombre del modelo
file_name = f"{model_name}{counter}{file_extension}"
counter += 1
# Guardar el modelo con el nombre único en el directorio correcto
custom_vgg_model.save(model_directory + file_name)
models/model_VGG16_v.keras
Gráficas de entrenamiento y validación (accuracy - loss)¶
def plotTraining(hist, epochs, typeData):
if typeData=="loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'-r',label='Loss Training')
if typeData=="accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'-r',label='Accuracy Training')
if typeData=="val_loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['val_loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'--b',label='Loss Validate')
if typeData=="val_accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['val_accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'--b',label='Training Validate')
plt.rc('xtick',labelsize=24)
plt.rc('ytick',labelsize=24)
plt.rc('legend', fontsize=18)
plt.legend()
plt.xlabel('Number of Epochs',fontsize=24)
plt.grid(True)
plotTraining(model_history,epochs,"loss")
plotTraining(model_history,epochs,"accuracy")
plotTraining(model_history,epochs,"val_loss")
plotTraining(model_history,epochs,"val_accuracy")
Predicción usando el modelo entrenado¶
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
from keras.models import load_model
names = ['CYANOCORAX YNCAS', 'PIRANGA RUBRA', 'PITANGUS SULPHURATUS', 'PYROCEPHALUS RUBINUS', 'RUPORNIS MAGNIROSTRIS', 'SICALIS FLAVEOLA', 'THRAUPIS EPISCOPUS', 'TIARIS OLIVACEUS', 'TYRANNUS MELANCHOLICUS', 'ZONOTRICHIA CAPENSIS', 'CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS', 'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS', 'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA', 'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
# Cargar el modelo
modelt = load_model(model_directory + file_name)
#modelt = custom_vgg_model
# Ruta de la imagen de prueba
imaget_path = "ImagenPrueba_SF.png"
# Leer la imagen, cambiar tamaño y preprocesar
imaget=cv2.resize(cv2.imread(imaget_path), (width_shape, height_shape), interpolation = cv2.INTER_AREA)
xt = np.asarray(imaget)
xt=preprocess_input(xt)
xt = np.expand_dims(xt,axis=0)
# Obtener las predicciones del modelo
preds = modelt.predict(xt)
# Obtener la clase predicha y su porcentaje de confianza
predicted_class_index = np.argmax(preds)
predicted_class_name = names[predicted_class_index]
confidence_percentage = preds[0][predicted_class_index] * 100
# Imprimir el resultado
print(f'Clase predicha: {predicted_class_name}')
print(f'Porcentaje de confianza: {confidence_percentage:.2f}%')
# Mostrar la imagen
plt.imshow(cv2.cvtColor(np.asarray(imaget), cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 743ms/step Clase predicha: SICALIS FLAVEOLA Porcentaje de confianza: 89.46%
Matriz de confusión y métricas de desempeño¶
La validación oscila tanto porque podría haber un problema de sobreajuste (overfitting) o la necesidad de ajustar los hiperparámetros. El modelo parece estar aprendiendo bien durante el entrenamiento, como se muestra en la línea roja continua “Accuracy Training”, pero tiene un rendimiento inconsistente en los datos de validación, indicado por la línea azul punteada “Training Validate”.
Para abordar esto:
Regularización: Introduce técnicas de regularización como dropout o weight decay para reducir el sobreajuste. Ajuste de Hiperparámetros: Experimenta con diferentes valores para hiperparámetros como tasa de aprendizaje, número de capas ocultas, etc. Más Datos: Si es posible, aumenta el tamaño del conjunto de datos de entrenamiento. Early Stopping: Detén el entrenamiento cuando la precisión de validación deja de mejorar. El gráfico muestra que el modelo necesita más ajustes para generalizar mejor a datos no vistos
from sklearn.metrics import confusion_matrix, f1_score, roc_curve, precision_score, recall_score, accuracy_score, roc_auc_score
from sklearn import metrics
from mlxtend.plotting import plot_confusion_matrix
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
names = [ 'CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS', 'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS', 'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA', 'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
test_data_dir = 'dataset/test'
test_datagen = ImageDataGenerator()
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(width_shape, height_shape),
batch_size = batch_size,
class_mode='categorical',
shuffle=False)
custom_Model50= load_model(model_directory + file_name)
predictions = custom_Model50.predict(test_generator)
y_pred = np.argmax(predictions, axis=1)
y_real = test_generator.classes
matc=confusion_matrix(y_real, y_pred)
plot_confusion_matrix(conf_mat=matc, figsize=(9,9), class_names = names, show_normed=False)
plt.tight_layout()
print(metrics.classification_report(y_real,y_pred, digits = 4))
Found 89 images belonging to 18 classes.
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\keras\src\trainers\data_adapters\py_dataset_adapter.py:120: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored. self._warn_if_super_not_called()
3/3 ━━━━━━━━━━━━━━━━━━━━ 13s 4s/step
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\mlxtend\plotting\plot_confusion_matrix.py:102: RuntimeWarning: invalid value encountered in divide
normed_conf_mat = conf_mat.astype("float") / total_samples
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\sklearn\metrics\_classification.py:1344: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
precision recall f1-score support
0 1.0000 0.2000 0.3333 5
1 0.5714 0.8000 0.6667 5
2 0.8333 1.0000 0.9091 5
3 1.0000 1.0000 1.0000 5
4 0.6250 1.0000 0.7692 5
5 0.7500 1.0000 0.8571 6
6 0.5833 0.8750 0.7000 8
7 0.4167 1.0000 0.5882 5
8 1.0000 0.8000 0.8889 5
9 0.1250 0.2000 0.1538 5
10 0.5000 0.2000 0.2857 5
11 0.2500 0.2000 0.2222 5
12 0.0000 0.0000 0.0000 5
13 1.0000 0.4000 0.5714 5
14 1.0000 0.6000 0.7500 5
15 1.0000 0.2000 0.3333 5
16 0.0000 0.0000 0.0000 5
17 0.0000 0.0000 0.0000 0
accuracy 0.5730 89
macro avg 0.5919 0.5264 0.5016 89
weighted avg 0.6267 0.5730 0.5405 89
Prueba 4¶
width_shape = 224
height_shape = 224
num_classes = 5
epochs = 50
batch_size = 32
train_data_dir = 'dataset/train'
validation_data_dir = 'dataset/valid'
# Definir el generador de imágenes para el conjunto de entrenamiento con aumentos de datos
train_datagen = ImageDataGenerator(
rotation_range=20, # Rango de grados para rotación aleatoria
zoom_range=0.2, # Rango de zoom aleatorio
width_shift_range=0.1, # Rango de desplazamiento horizontal aleatorio
height_shift_range=0.1, # Rango de desplazamiento vertical aleatorio
horizontal_flip=True, # Volteo horizontal aleatorio
vertical_flip=False, # No se aplica volteo vertical
preprocessing_function=preprocess_input) # Función de preprocesamiento
# Definir el generador de imágenes para el conjunto de validación con los mismos aumentos de datos
valid_datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
preprocessing_function=preprocess_input)
# Crear un generador de lotes de imágenes para el conjunto de entrenamiento
train_generator = train_datagen.flow_from_directory(
train_data_dir, # Directorio que contiene las imágenes de entrenamiento
target_size=(width_shape, height_shape), # Tamaño al que se redimensionarán las imágenes
batch_size=batch_size, # Tamaño del lote
class_mode='categorical') # Modo de clasificación para imágenes categóricas
# Crear un generador de lotes de imágenes para el conjunto de validación
validation_generator = valid_datagen.flow_from_directory(
validation_data_dir, # Directorio que contiene las imágenes de validación
target_size=(width_shape, height_shape), # Tamaño al que se redimensionarán las imágenes
batch_size=batch_size, # Tamaño del lote
class_mode='categorical') # Modo de clasificación para imágenes categóricas
Found 834 images belonging to 5 classes. Found 25 images belonging to 5 classes.
# Definir el número de muestras de entrenamiento y validación
nb_train_samples = 834
nb_validation_samples = 25
# Definir la entrada de la red neuronal con el tamaño de las imágenes
image_input = Input(shape=(width_shape, height_shape, 3))
# Cargar el modelo VGG16 preentrenado con pesos ajustados desde ImageNet
model = VGG16(input_tensor=image_input, include_top=True, weights='imagenet')
# Obtener la salida de la penúltima capa densa del modelo VGG16 (fc2)
last_layer = model.get_layer('fc2').output
# Añadir una nueva capa densa al final del modelo para la clasificación multiclase con regularización L2 (Evita sobreajuste)
out = Dense(num_classes, activation='softmax', kernel_regularizer='l2', name='output')(last_layer)
# Crear un nuevo modelo personalizado que toma la entrada de la imagen y produce la salida clasificada
custom_vgg_model = Model(image_input, out)
# Congelar todas las capas del modelo, excepto la capa densa añadida
for layer in custom_vgg_model.layers[:-1]:
layer.trainable = False
# Compilar el modelo con una función de pérdida, optimizador y métricas especificadas
custom_vgg_model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=0.0001), metrics=['accuracy'])
# Mostrar un resumen del modelo que incluye la arquitectura y el número de parámetros
custom_vgg_model.summary()
# Entrenar el modelo utilizando generadores de datos para el conjunto de entrenamiento y validación
model_history = custom_vgg_model.fit(
train_generator,
epochs=epochs,
validation_data=validation_generator,
steps_per_epoch=nb_train_samples//batch_size, # Número de pasos por época de entrenamiento
validation_steps=nb_validation_samples//batch_size) # Número de pasos por época de validación
Model: "functional_7"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_3 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ output (Dense) │ (None, 5) │ 20,485 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 134,281,029 (512.24 MB)
Trainable params: 20,485 (80.02 KB)
Non-trainable params: 134,260,544 (512.16 MB)
Epoch 1/50
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\keras\src\trainers\data_adapters\py_dataset_adapter.py:120: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored. self._warn_if_super_not_called()
26/26 ━━━━━━━━━━━━━━━━━━━━ 91s 3s/step - accuracy: 0.2838 - loss: 2.2863 - val_accuracy: 0.6000 - val_loss: 1.0493 Epoch 2/50 1/26 ━━━━━━━━━━━━━━━━━━━━ 1:17 3s/step - accuracy: 0.6562 - loss: 1.1432
C:\Users\Oscar Diaz\anaconda3\Lib\contextlib.py:158: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset. self.gen.throw(typ, value, traceback)
26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 114ms/step - accuracy: 0.6562 - loss: 1.1432 - val_accuracy: 0.6000 - val_loss: 1.0793 Epoch 3/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 89s 3s/step - accuracy: 0.7356 - loss: 0.8568 - val_accuracy: 0.8400 - val_loss: 0.5498 Epoch 4/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 114ms/step - accuracy: 0.7812 - loss: 0.5293 - val_accuracy: 0.7600 - val_loss: 0.7074 Epoch 5/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 87s 3s/step - accuracy: 0.8678 - loss: 0.5005 - val_accuracy: 0.8800 - val_loss: 0.4185 Epoch 6/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 121ms/step - accuracy: 0.9375 - loss: 0.2667 - val_accuracy: 0.8000 - val_loss: 0.5027 Epoch 7/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 88s 3s/step - accuracy: 0.9241 - loss: 0.3500 - val_accuracy: 0.9200 - val_loss: 0.3046 Epoch 8/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 123ms/step - accuracy: 0.9375 - loss: 0.3103 - val_accuracy: 0.8400 - val_loss: 0.4757 Epoch 9/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 94s 4s/step - accuracy: 0.9409 - loss: 0.3014 - val_accuracy: 0.8800 - val_loss: 0.2997 Epoch 10/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 112ms/step - accuracy: 0.9375 - loss: 0.2980 - val_accuracy: 0.9200 - val_loss: 0.4188 Epoch 11/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 87s 3s/step - accuracy: 0.9506 - loss: 0.2925 - val_accuracy: 0.9600 - val_loss: 0.2658 Epoch 12/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 111ms/step - accuracy: 0.9688 - loss: 0.2506 - val_accuracy: 0.9200 - val_loss: 0.3867 Epoch 13/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 87s 3s/step - accuracy: 0.9606 - loss: 0.2647 - val_accuracy: 0.9600 - val_loss: 0.2552 Epoch 14/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 112ms/step - accuracy: 1.0000 - loss: 0.1633 - val_accuracy: 0.9600 - val_loss: 0.2620 Epoch 15/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 89s 3s/step - accuracy: 0.9647 - loss: 0.2232 - val_accuracy: 0.9600 - val_loss: 0.1819 Epoch 16/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 121ms/step - accuracy: 1.0000 - loss: 0.1440 - val_accuracy: 0.9600 - val_loss: 0.2634 Epoch 17/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 87s 3s/step - accuracy: 0.9693 - loss: 0.2086 - val_accuracy: 0.9600 - val_loss: 0.2160 Epoch 18/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 114ms/step - accuracy: 0.9062 - loss: 0.2552 - val_accuracy: 1.0000 - val_loss: 0.2158 Epoch 19/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 87s 3s/step - accuracy: 0.9766 - loss: 0.2131 - val_accuracy: 0.9600 - val_loss: 0.1772 Epoch 20/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 114ms/step - accuracy: 0.9688 - loss: 0.1584 - val_accuracy: 1.0000 - val_loss: 0.1633 Epoch 21/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 88s 3s/step - accuracy: 0.9722 - loss: 0.1967 - val_accuracy: 0.9600 - val_loss: 0.2458 Epoch 22/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 110ms/step - accuracy: 0.9688 - loss: 0.1797 - val_accuracy: 0.8800 - val_loss: 0.3268 Epoch 23/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 87s 3s/step - accuracy: 0.9797 - loss: 0.1902 - val_accuracy: 0.9600 - val_loss: 0.1625 Epoch 24/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 113ms/step - accuracy: 0.9688 - loss: 0.1876 - val_accuracy: 1.0000 - val_loss: 0.1730 Epoch 25/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 87s 3s/step - accuracy: 0.9698 - loss: 0.1968 - val_accuracy: 1.0000 - val_loss: 0.1456 Epoch 26/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 109ms/step - accuracy: 1.0000 - loss: 0.1600 - val_accuracy: 1.0000 - val_loss: 0.1515 Epoch 27/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 89s 3s/step - accuracy: 0.9881 - loss: 0.1693 - val_accuracy: 1.0000 - val_loss: 0.1785 Epoch 28/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 111ms/step - accuracy: 0.9688 - loss: 0.1946 - val_accuracy: 0.9600 - val_loss: 0.1978 Epoch 29/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 87s 3s/step - accuracy: 0.9793 - loss: 0.1947 - val_accuracy: 1.0000 - val_loss: 0.1649 Epoch 30/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 111ms/step - accuracy: 1.0000 - loss: 0.1523 - val_accuracy: 1.0000 - val_loss: 0.1205 Epoch 31/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 87s 3s/step - accuracy: 0.9862 - loss: 0.1702 - val_accuracy: 1.0000 - val_loss: 0.1362 Epoch 32/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 114ms/step - accuracy: 0.9062 - loss: 0.2687 - val_accuracy: 0.9600 - val_loss: 0.2206 Epoch 33/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 87s 3s/step - accuracy: 0.9853 - loss: 0.1688 - val_accuracy: 1.0000 - val_loss: 0.1529 Epoch 34/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 112ms/step - accuracy: 1.0000 - loss: 0.1173 - val_accuracy: 0.9200 - val_loss: 0.2665 Epoch 35/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 88s 3s/step - accuracy: 0.9871 - loss: 0.1554 - val_accuracy: 0.9600 - val_loss: 0.1687 Epoch 36/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 112ms/step - accuracy: 1.0000 - loss: 0.1319 - val_accuracy: 1.0000 - val_loss: 0.1514 Epoch 37/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 88s 3s/step - accuracy: 0.9874 - loss: 0.1497 - val_accuracy: 0.9600 - val_loss: 0.1989 Epoch 38/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 111ms/step - accuracy: 0.9688 - loss: 0.1870 - val_accuracy: 1.0000 - val_loss: 0.1098 Epoch 39/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 88s 3s/step - accuracy: 0.9881 - loss: 0.1524 - val_accuracy: 0.9600 - val_loss: 0.2299 Epoch 40/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 109ms/step - accuracy: 1.0000 - loss: 0.1498 - val_accuracy: 0.9600 - val_loss: 0.2152 Epoch 41/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 88s 3s/step - accuracy: 0.9931 - loss: 0.1487 - val_accuracy: 0.9600 - val_loss: 0.1595 Epoch 42/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 113ms/step - accuracy: 0.9688 - loss: 0.1532 - val_accuracy: 1.0000 - val_loss: 0.1128 Epoch 43/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 90s 3s/step - accuracy: 0.9803 - loss: 0.1550 - val_accuracy: 1.0000 - val_loss: 0.1383 Epoch 44/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 109ms/step - accuracy: 1.0000 - loss: 0.1180 - val_accuracy: 0.9200 - val_loss: 0.2364 Epoch 45/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 88s 3s/step - accuracy: 0.9875 - loss: 0.1447 - val_accuracy: 1.0000 - val_loss: 0.1334 Epoch 46/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 116ms/step - accuracy: 1.0000 - loss: 0.1122 - val_accuracy: 0.9200 - val_loss: 0.2801 Epoch 47/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 88s 3s/step - accuracy: 0.9867 - loss: 0.1586 - val_accuracy: 1.0000 - val_loss: 0.1181 Epoch 48/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 6s 111ms/step - accuracy: 1.0000 - loss: 0.1165 - val_accuracy: 0.9600 - val_loss: 0.1809 Epoch 49/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 90s 3s/step - accuracy: 0.9838 - loss: 0.1422 - val_accuracy: 0.9600 - val_loss: 0.1458 Epoch 50/50 26/26 ━━━━━━━━━━━━━━━━━━━━ 3s 111ms/step - accuracy: 1.0000 - loss: 0.0940 - val_accuracy: 0.9600 - val_loss: 0.1969
import os
# Nombre base del modelo
model_name = "model_VGG16_v"
# Extensión del archivo
file_extension = ".keras"
# Directorio donde se guardarán los modelos
model_directory = "models/"
# Inicializar contador
counter = 1
# Generar el nombre completo del archivo
file_name = model_name + file_extension
ruta=model_directory + file_name
print(ruta)
# Verificar si el modelo ya está guardado
while os.path.exists(model_directory + file_name):
# Si el archivo existe, agregar un número al final del nombre del modelo
file_name = f"{model_name}{counter}{file_extension}"
counter += 1
# Guardar el modelo con el nombre único en el directorio correcto
custom_vgg_model.save(model_directory + file_name)
models/model_VGG16_v.keras
def plotTraining(hist, epochs, typeData):
# Seleccionar la figura y establecer el tamaño
# Dependiendo del tipo de datos (loss o accuracy), se elige la figura correspondiente
if typeData=="loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'-r',label='Loss Training')
if typeData=="accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'-r',label='Accuracy Training')
if typeData=="val_loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['val_loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'--b',label='Loss Validate')
if typeData=="val_accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['val_accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'--b',label='Training Validate')
plt.rc('xtick',labelsize=24)
plt.rc('ytick',labelsize=24)
plt.rc('legend', fontsize=18)
plt.legend()
plt.xlabel('Number of Epochs',fontsize=24)
plt.grid(True)
plotTraining(model_history,epochs,"loss")
plotTraining(model_history,epochs,"accuracy")
plotTraining(model_history,epochs,"val_loss")
plotTraining(model_history,epochs,"val_accuracy")
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
from keras.models import load_model
names = ['CATHARTES AURA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS']
# Cargar el modelo
modelt = load_model(model_directory + file_name)
#modelt = custom_vgg_model
# Ruta de la imagen de prueba
imaget_path = "ImagenPrueba2.jpg"
# Leer la imagen, cambiar tamaño y preprocesar
imaget=cv2.resize(cv2.imread(imaget_path), (width_shape, height_shape), interpolation = cv2.INTER_AREA)
xt = np.asarray(imaget)
xt=preprocess_input(xt)
xt = np.expand_dims(xt,axis=0)
# Obtener las predicciones del modelo
preds = modelt.predict(xt)
# Obtener la clase predicha y su porcentaje de confianza
predicted_class_index = np.argmax(preds)
predicted_class_name = names[predicted_class_index]
confidence_percentage = preds[0][predicted_class_index] * 100
# Imprimir el resultado
print(f'Clase predicha: {predicted_class_name}')
print(f'Porcentaje de confianza: {confidence_percentage:.2f}%')
# Mostrar la imagen
plt.imshow(cv2.cvtColor(np.asarray(imaget), cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 816ms/step Clase predicha: CYANOCORAX YNCAS Porcentaje de confianza: 52.63%
from sklearn.metrics import confusion_matrix, f1_score, roc_curve, precision_score, recall_score, accuracy_score, roc_auc_score
from sklearn import metrics
from mlxtend.plotting import plot_confusion_matrix
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
names = ['CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS', 'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS', 'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA', 'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
test_data_dir = 'dataset/test'
test_datagen = ImageDataGenerator()
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(width_shape, height_shape),
batch_size = batch_size,
class_mode='categorical',
shuffle=False)
custom_Model= load_model(model_directory + file_name)
predictions = custom_Model.predict(test_generator)
y_pred = np.argmax(predictions, axis=1)
y_real = test_generator.classes
matc=confusion_matrix(y_real, y_pred)
print(metrics.classification_report(y_real,y_pred, digits = 4))
Found 27 images belonging to 5 classes. 1/1 ━━━━━━━━━━━━━━━━━━━━ 3s 3s/step precision recall f1-score support 0 1.0000 0.2000 0.3333 5 1 0.8333 1.0000 0.9091 5 2 0.7143 1.0000 0.8333 5 3 0.8333 1.0000 0.9091 5 4 1.0000 1.0000 1.0000 7 accuracy 0.8519 27 macro avg 0.8762 0.8400 0.7970 27 weighted avg 0.8854 0.8519 0.8120 27
prueba 5¶
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout, BatchNormalization, Input
from keras.optimizers import Adam
from keras.callbacks import TensorBoard, ModelCheckpoint
from keras.utils import to_categorical
import os
import numpy as np
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
from keras.applications.vgg16 import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import load_model
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
width_shape = 224
height_shape = 224
num_classes = 18
epochs = 50
batch_size = 32
train_data_dir = 'datasetpreprocesado/train'
validation_data_dir = 'datasetpreprocesado/valid'
# Definir el generador de imágenes para el conjunto de entrenamiento con aumentos de datos
train_datagen = ImageDataGenerator(
rotation_range=20, # Rango de grados para rotación aleatoria
zoom_range=0.2, # Rango de zoom aleatorio
width_shift_range=0.1, # Rango de desplazamiento horizontal aleatorio
height_shift_range=0.1, # Rango de desplazamiento vertical aleatorio
horizontal_flip=True, # Volteo horizontal aleatorio
vertical_flip=False, # No se aplica volteo vertical
preprocessing_function=preprocess_input) # Función de preprocesamiento
# Definir el generador de imágenes para el conjunto de validación con los mismos aumentos de datos
valid_datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
preprocessing_function=preprocess_input)
# Crear un generador de lotes de imágenes para el conjunto de entrenamiento
train_generator = train_datagen.flow_from_directory(
train_data_dir, # Directorio que contiene las imágenes de entrenamiento
target_size=(width_shape, height_shape), # Tamaño al que se redimensionarán las imágenes
batch_size=batch_size, # Tamaño del lote
class_mode='categorical') # Modo de clasificación para imágenes categóricas
# Crear un generador de lotes de imágenes para el conjunto de validación
validation_generator = valid_datagen.flow_from_directory(
validation_data_dir, # Directorio que contiene las imágenes de validación
target_size=(width_shape, height_shape), # Tamaño al que se redimensionarán las imágenes
batch_size=batch_size, # Tamaño del lote
class_mode='categorical') # Modo de clasificación para imágenes categóricas
Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
# Definir el número de muestras de entrenamiento y validación
nb_train_samples = 2621
nb_validation_samples = 738
# Definir la entrada de la red neuronal con el tamaño de las imágenes
image_input = Input(shape=(width_shape, height_shape, 3))
# Cargar el modelo VGG16 preentrenado con pesos ajustados desde ImageNet
model = VGG16(input_tensor=image_input, include_top=True, weights='imagenet')
# Obtener la salida de la penúltima capa densa del modelo VGG16 (fc2)
last_layer = model.get_layer('fc2').output
# Añadir una nueva capa densa al final del modelo para la clasificación multiclase con regularización L2 (Evita sobreajuste)
out = Dense(num_classes, activation='softmax', kernel_regularizer='l2', name='output')(last_layer)
# Crear un nuevo modelo personalizado que toma la entrada de la imagen y produce la salida clasificada
custom_vgg_model = Model(image_input, out)
# Congelar todas las capas del modelo, excepto la capa densa añadida
for layer in custom_vgg_model.layers[:-1]:
layer.trainable = False
# Compilar el modelo con una función de pérdida, optimizador y métricas especificadas
custom_vgg_model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=0.0001), metrics=['accuracy'])
# Mostrar un resumen del modelo que incluye la arquitectura y el número de parámetros
custom_vgg_model.summary()
# Entrenar el modelo utilizando generadores de datos para el conjunto de entrenamiento y validación
model_history = custom_vgg_model.fit(
train_generator,
epochs=epochs,
validation_data=validation_generator,
steps_per_epoch=nb_train_samples//batch_size, # Número de pasos por época de entrenamiento
validation_steps=nb_validation_samples//batch_size) # Número de pasos por época de validación
import os
# Nombre base del modelo
model_name = "model_VGG16_v"
# Extensión del archivo
file_extension = ".keras"
# Directorio donde se guardarán los modelos
model_directory = "models/"
# Inicializar contador
counter = 1
# Generar el nombre completo del archivo
file_name = model_name + file_extension
ruta=model_directory + file_name
print(ruta)
# Verificar si el modelo ya está guardado
while os.path.exists(model_directory + file_name):
# Si el archivo existe, agregar un número al final del nombre del modelo
file_name = f"{model_name}{counter}{file_extension}"
counter += 1
# Guardar el modelo con el nombre único en el directorio correcto
custom_vgg_model.save(model_directory + file_name)
models/model_VGG16_v.keras
def plotTraining(hist, epochs, typeData):
# Seleccionar la figura y establecer el tamaño
# Dependiendo del tipo de datos (loss o accuracy), se elige la figura correspondiente
if typeData=="loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'-r',label='Loss Training')
if typeData=="accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'-r',label='Accuracy Training')
if typeData=="val_loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['val_loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'--b',label='Loss Validate')
if typeData=="val_accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['val_accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'--b',label='Training Validate')
plt.rc('xtick',labelsize=24)
plt.rc('ytick',labelsize=24)
plt.rc('legend', fontsize=18)
plt.legend()
plt.xlabel('Number of Epochs',fontsize=24)
plt.grid(True)
def plotTraining(hist, epochs, typeData):
# Seleccionar la figura y establecer el tamaño
# Dependiendo del tipo de datos (loss o accuracy), se elige la figura correspondiente
if typeData=="loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'-r',label='Loss Training')
if typeData=="accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'-r',label='Accuracy Training')
if typeData=="val_loss":
plt.figure(1,figsize=(10,5))
yc=hist.history['val_loss']
xc=range(epochs)
plt.ylabel('Loss', fontsize=24)
plt.plot(xc,yc,'--b',label='Loss Validate')
if typeData=="val_accuracy":
plt.figure(2,figsize=(10,5))
yc=hist.history['val_accuracy']
for i in range(0, len(yc)):
yc[i]=100*yc[i]
xc=range(epochs)
plt.ylabel('Accuracy (%)', fontsize=24)
plt.plot(xc,yc,'--b',label='Training Validate')
plt.rc('xtick',labelsize=24)
plt.rc('ytick',labelsize=24)
plt.rc('legend', fontsize=18)
plt.legend()
plt.xlabel('Number of Epochs',fontsize=24)
plt.grid(True)
plotTraining(model_history,epochs,"loss")
plotTraining(model_history,epochs,"accuracy")
plotTraining(model_history,epochs,"val_loss")
plotTraining(model_history,epochs,"val_accuracy")
from sklearn.metrics import confusion_matrix, f1_score, roc_curve, precision_score, recall_score, accuracy_score, roc_auc_score
from sklearn import metrics
from mlxtend.plotting import plot_confusion_matrix
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
names = ['CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS', 'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS', 'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA', 'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
test_data_dir = 'datasetpreprocesado/test'
test_datagen = ImageDataGenerator()
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(width_shape, height_shape),
batch_size = batch_size,
class_mode='categorical',
shuffle=False)
custom_Model= load_model('models/model_VGG16_v5.keras')
predictions = custom_Model.predict(test_generator)
y_pred = np.argmax(predictions, axis=1)
y_real = test_generator.classes
matc=confusion_matrix(y_real, y_pred)
print(metrics.classification_report(y_real,y_pred, digits = 4))
Found 738 images belonging to 18 classes.
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\keras\src\trainers\data_adapters\py_dataset_adapter.py:120: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored. self._warn_if_super_not_called()
24/24 ━━━━━━━━━━━━━━━━━━━━ 76s 3s/step precision recall f1-score support 0 1.0000 0.2927 0.4528 41 1 0.8095 0.4146 0.5484 41 2 0.6949 1.0000 0.8200 41 3 0.7400 0.9024 0.8132 41 4 0.4933 0.9024 0.6379 41 5 0.5600 0.6829 0.6154 41 6 0.6308 1.0000 0.7736 41 7 0.6271 0.9024 0.7400 41 8 0.9118 0.7561 0.8267 41 9 0.2871 0.7073 0.4085 41 10 0.7750 0.7561 0.7654 41 11 0.6500 0.6341 0.6420 41 12 1.0000 0.2683 0.4231 41 13 0.8000 0.0976 0.1739 41 14 0.8529 0.7073 0.7733 41 15 1.0000 0.1951 0.3265 41 16 0.4091 0.4390 0.4235 41 17 0.7333 0.5366 0.6197 41 accuracy 0.6220 738 macro avg 0.7208 0.6220 0.5991 738 weighted avg 0.7208 0.6220 0.5991 738
from sklearn.metrics import confusion_matrix, f1_score, roc_curve, precision_score, recall_score, accuracy_score, roc_auc_score
from sklearn import metrics
from mlxtend.plotting import plot_confusion_matrix
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
names = [ 'CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS', 'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS', 'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA', 'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
test_data_dir = 'datasetpreprocesado/test'
test_datagen = ImageDataGenerator()
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(width_shape, height_shape),
batch_size = batch_size,
class_mode='categorical',
shuffle=False)
custom_Model50= load_model('models/model_VGG16_v5.keras')
predictions = custom_Model50.predict(test_generator)
y_pred = np.argmax(predictions, axis=1)
y_real = test_generator.classes
matc=confusion_matrix(y_real, y_pred)
plot_confusion_matrix(conf_mat=matc, figsize=(9,9), class_names = names, show_normed=False)
plt.tight_layout()
print(metrics.classification_report(y_real,y_pred, digits = 4))
Found 361 images belonging to 18 classes.
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\keras\src\trainers\data_adapters\py_dataset_adapter.py:120: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored. self._warn_if_super_not_called()
5/6 ━━━━━━━━━━━━━━━━━━━━ 6s 7s/step WARNING:tensorflow:5 out of the last 13 calls to <function TensorFlowTrainer.make_predict_function.<locals>.one_step_on_data_distributed at 0x00000153108E7100> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. 6/6 ━━━━━━━━━━━━━━━━━━━━ 40s 6s/step precision recall f1-score support 0 1.0000 0.2000 0.3333 20 1 0.9167 0.5500 0.6875 20 2 0.7037 0.9500 0.8085 20 3 0.5862 0.8500 0.6939 20 4 0.4146 0.8500 0.5574 20 5 0.5357 0.7500 0.6250 20 6 0.5714 1.0000 0.7273 20 7 0.6296 0.8500 0.7234 20 8 0.9231 0.6000 0.7273 20 9 0.3200 0.8000 0.4571 20 10 0.7222 0.6500 0.6842 20 11 0.5500 0.5500 0.5500 20 12 1.0000 0.2000 0.3333 20 13 0.5000 0.0500 0.0909 20 14 1.0000 0.8000 0.8889 20 15 1.0000 0.0500 0.0952 20 16 0.6316 0.6000 0.6154 20 17 0.6667 0.4762 0.5556 21 accuracy 0.5983 361 macro avg 0.7040 0.5987 0.5641 361 weighted avg 0.7039 0.5983 0.5641 361
prueba 6¶
width_shape = 224
height_shape = 224
num_classes = 18
epochs = 50
batch_size = 32
train_data_dir = 'datasetpreprocesado/train'
validation_data_dir = 'datasetpreprocesado/valid'
# Definir el generador de imágenes para el conjunto de entrenamiento con aumentos de datos
train_datagen = ImageDataGenerator(
rotation_range=20, # Rango de grados para rotación aleatoria
zoom_range=0.2, # Rango de zoom aleatorio
width_shift_range=0.1, # Rango de desplazamiento horizontal aleatorio
height_shift_range=0.1, # Rango de desplazamiento vertical aleatorio
horizontal_flip=True, # Volteo horizontal aleatorio
vertical_flip=False, # No se aplica volteo vertical
preprocessing_function=preprocess_input) # Función de preprocesamiento
# Definir el generador de imágenes para el conjunto de validación con los mismos aumentos de datos
valid_datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
preprocessing_function=preprocess_input)
# Crear un generador de lotes de imágenes para el conjunto de entrenamiento
train_generator = train_datagen.flow_from_directory(
train_data_dir, # Directorio que contiene las imágenes de entrenamiento
target_size=(width_shape, height_shape), # Tamaño al que se redimensionarán las imágenes
batch_size=batch_size, # Tamaño del lote
class_mode='categorical') # Modo de clasificación para imágenes categóricas
# Crear un generador de lotes de imágenes para el conjunto de validación
validation_generator = valid_datagen.flow_from_directory(
validation_data_dir, # Directorio que contiene las imágenes de validación
target_size=(width_shape, height_shape), # Tamaño al que se redimensionarán las imágenes
batch_size=batch_size, # Tamaño del lote
class_mode='categorical') # Modo de clasificación para imágenes categóricas
Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Para aplicar la técnica de ajuste de hiperparámetros, como la tasa de aprendizaje, la regularización y el tamaño del lote, podemos realizar una búsqueda de cuadrícula para encontrar la combinación óptima de estos hiperparámetros que maximice la precisión y la sensibilidad del modelo en el conjunto de validación.
Aquí hay un ejemplo de cómo podríamos ajustar la tasa de aprendizaje y el tamaño del lote utilizando una búsqueda de cuadrícula:
# Importaciones necesarias
import time
import psutil
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Flatten
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Definir el número de muestras de entrenamiento y validación
nb_train_samples = 2621
nb_validation_samples = 738
# Definir el número de épocas
epochs = 50
# Definir el tamaño de las imágenes
width_shape = 224
height_shape = 224
# Definir el número de clases
num_classes = 18 # Ajustar según el número de clases en tu dataset
# Directorios de datos de entrenamiento y validación
train_data_dir = 'datasetpreprocesado/train'
validation_data_dir = 'datasetpreprocesado/valid'
# Función para crear y entrenar el modelo
def create_and_train_vgg16_model(learning_rate, l2_regularization, batch_size):
# Crear generadores de datos con el batch_size proporcionado
train_datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
preprocessing_function=preprocess_input
)
valid_datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
preprocessing_function=preprocess_input
)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(width_shape, height_shape),
batch_size=batch_size,
class_mode='categorical'
)
validation_generator = valid_datagen.flow_from_directory(
validation_data_dir,
target_size=(width_shape, height_shape),
batch_size=batch_size,
class_mode='categorical'
)
# Definir la entrada de la red neuronal con el tamaño de las imágenes
image_input = Input(shape=(width_shape, height_shape, 3))
# Cargar el modelo VGG16 preentrenado con pesos ajustados desde ImageNet
model = VGG16(input_tensor=image_input, include_top=False, weights='imagenet')
# Aplanar la salida del VGG16
x = Flatten()(model.output)
# Añadir una nueva capa densa al final del modelo para la clasificación multiclase con regularización L2
out = Dense(num_classes, activation='softmax', kernel_regularizer='l2')(x)
# Crear un nuevo modelo personalizado que toma la entrada de la imagen y produce la salida clasificada
custom_vgg_model = Model(inputs=model.input, outputs=out)
# Congelar todas las capas del modelo base VGG16
for layer in model.layers:
layer.trainable = False
# Compilar el modelo con una función de pérdida, optimizador y métricas especificadas
custom_vgg_model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=learning_rate), metrics=['accuracy'])
# Mostrar un resumen del modelo que incluye la arquitectura y el número de parámetros
custom_vgg_model.summary()
# Medir el tiempo y el uso de CPU/memoria antes de entrenar
start_time = time.time()
start_cpu = psutil.cpu_percent(interval=None)
start_memory = psutil.virtual_memory().used
# Crear los callbacks para Early Stopping y guardar el mejor modelo
checkpoint = ModelCheckpoint('best_model.keras', monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
early_stopping = EarlyStopping(monitor='val_accuracy', patience=10, verbose=1, restore_best_weights=True)
# Entrenar el modelo utilizando generadores de datos para el conjunto de entrenamiento y validación
model_history = custom_vgg_model.fit(
train_generator,
epochs=epochs,
validation_data=validation_generator,
steps_per_epoch=nb_train_samples // batch_size,
validation_steps=nb_validation_samples // batch_size,
callbacks=[checkpoint, early_stopping]
)
# Medir el tiempo y el uso de CPU/memoria después de entrenar
end_time = time.time()
end_cpu = psutil.cpu_percent(interval=None)
end_memory = psutil.virtual_memory().used
# Calcular métricas de tiempo y uso de recursos
elapsed_time = end_time - start_time
cpu_usage = end_cpu - start_cpu
memory_usage = end_memory - start_memory
print(f"Tiempo transcurrido para el entrenamiento: {elapsed_time} segundos")
print(f"Uso de CPU durante el entrenamiento: {cpu_usage}%")
print(f"Aumento en uso de memoria: {memory_usage / (1024 ** 3)} GB")
return model_history, elapsed_time, cpu_usage, memory_usage
# Definir rangos de búsqueda para hiperparámetros
learning_rates = [0.0001, 0.0005, 0.001]
l2_regularizations = [0.01, 0.05, 0.1]
batch_sizes = [16, 32, 64]
# Variables para almacenar los mejores hiperparámetros y su rendimiento
best_val_accuracy = 0
best_hyperparams = {}
# Realizar la búsqueda de cuadrícula
for learning_rate in learning_rates:
for l2_regularization in l2_regularizations:
for batch_size in batch_sizes:
# Crear y entrenar el modelo con los hiperparámetros actuales
model_history, elapsed_time, cpu_usage, memory_usage = create_and_train_vgg16_model(learning_rate, l2_regularization, batch_size)
# Obtener la mejor precisión de validación de esta combinación de hiperparámetros
val_accuracy = np.max(model_history.history['val_accuracy'])
# Imprimir los resultados
print(f"Resultados para lr={learning_rate}, l2={l2_regularization}, batch_size={batch_size}:")
print(f"Tiempo: {elapsed_time} segundos, CPU: {cpu_usage}%, Memoria: {memory_usage / (1024 ** 3)} GB")
print(f"Precisión de validación: {val_accuracy}")
# Actualizar los mejores hiperparámetros si la precisión de validación mejora
if val_accuracy > best_val_accuracy:
best_val_accuracy = val_accuracy
best_hyperparams = {
'learning_rate': learning_rate,
'l2_regularization': l2_regularization,
'batch_size': batch_size,
'val_accuracy': val_accuracy,
'elapsed_time': elapsed_time,
'cpu_usage': cpu_usage,
'memory_usage': memory_usage
}
# Imprimir los mejores hiperparámetros y su rendimiento
print("Mejores hiperparámetros encontrados:")
print(best_hyperparams)
Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_3"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_1 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_1 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_1 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\keras\src\trainers\data_adapters\py_dataset_adapter.py:120: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored. self._warn_if_super_not_called()
163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.3239 - loss: 15.1189 Epoch 1: val_accuracy improved from -inf to 0.76630, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 357s 2s/step - accuracy: 0.3251 - loss: 15.0816 - val_accuracy: 0.7663 - val_loss: 3.0483 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.6875 - loss: 6.4706
C:\Users\Oscar Diaz\anaconda3\Lib\contextlib.py:158: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset. self.gen.throw(typ, value, traceback)
Epoch 2: val_accuracy improved from 0.76630 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.6875 - loss: 6.4706 - val_accuracy: 1.0000 - val_loss: 0.3301 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.8212 - loss: 2.2798 Epoch 3: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.8213 - loss: 2.2797 - val_accuracy: 0.8492 - val_loss: 2.0425 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:15 2s/step - accuracy: 0.8125 - loss: 1.3523 Epoch 4: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.8125 - loss: 1.3523 - val_accuracy: 0.5000 - val_loss: 2.3321 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.8915 - loss: 1.4946 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 344s 2s/step - accuracy: 0.8915 - loss: 1.4945 - val_accuracy: 0.8995 - val_loss: 1.4449 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:27 2s/step - accuracy: 0.9375 - loss: 0.6051 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 0.6051 - val_accuracy: 1.0000 - val_loss: 0.3322 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9061 - loss: 1.1731 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 343s 2s/step - accuracy: 0.9062 - loss: 1.1730 - val_accuracy: 0.9212 - val_loss: 1.2573 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:23 2s/step - accuracy: 0.9375 - loss: 0.6550 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 0.6550 - val_accuracy: 1.0000 - val_loss: 0.3332 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9381 - loss: 0.9227 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 343s 2s/step - accuracy: 0.9381 - loss: 0.9227 - val_accuracy: 0.9239 - val_loss: 1.1713 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 5:03 2s/step - accuracy: 0.9375 - loss: 1.0189 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 1.0189 - val_accuracy: 1.0000 - val_loss: 0.3343 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9698 - loss: 0.6550 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 347s 2s/step - accuracy: 0.9697 - loss: 0.6555 - val_accuracy: 0.9511 - val_loss: 0.8576 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:11 2s/step - accuracy: 0.8750 - loss: 1.2820 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.8750 - loss: 1.2820 - val_accuracy: 1.0000 - val_loss: 0.3340 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2095.2359404563904 segundos Uso de CPU durante el entrenamiento: 69.80000000000001% Aumento en uso de memoria: 0.4678535461425781 GB Resultados para lr=0.0001, l2=0.01, batch_size=16: Tiempo: 2095.2359404563904 segundos, CPU: 69.80000000000001%, Memoria: 0.4678535461425781 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_5"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_2 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_2 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_2 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.2720 - loss: 15.9114 Epoch 1: val_accuracy improved from -inf to 0.70788, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 347s 4s/step - accuracy: 0.2740 - loss: 15.8460 - val_accuracy: 0.7079 - val_loss: 4.0938 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:14 3s/step - accuracy: 0.8125 - loss: 1.8199 Epoch 2: val_accuracy improved from 0.70788 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 8ms/step - accuracy: 0.8125 - loss: 1.8199 - val_accuracy: 1.0000 - val_loss: 0.3431 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.7644 - loss: 3.3503 Epoch 3: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 352s 4s/step - accuracy: 0.7647 - loss: 3.3437 - val_accuracy: 0.8274 - val_loss: 2.2608 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:04 3s/step - accuracy: 0.8438 - loss: 4.1966 Epoch 4: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.8438 - loss: 4.1966 - val_accuracy: 1.0000 - val_loss: 0.3452 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.8564 - loss: 1.8617 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 341s 4s/step - accuracy: 0.8565 - loss: 1.8605 - val_accuracy: 0.8777 - val_loss: 1.5264 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.9062 - loss: 1.0022 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.9062 - loss: 1.0022 - val_accuracy: 0.5000 - val_loss: 9.8272 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.8963 - loss: 1.3334 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.8963 - loss: 1.3335 - val_accuracy: 0.8967 - val_loss: 1.5497 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:22 3s/step - accuracy: 0.8438 - loss: 2.6329 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.8438 - loss: 2.6329 - val_accuracy: 1.0000 - val_loss: 0.3432 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9127 - loss: 1.1222 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 341s 4s/step - accuracy: 0.9128 - loss: 1.1220 - val_accuracy: 0.9389 - val_loss: 0.8494 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:01 3s/step - accuracy: 0.9688 - loss: 0.9636 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9688 - loss: 0.9636 - val_accuracy: 1.0000 - val_loss: 0.3432 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9344 - loss: 0.9563 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.9344 - loss: 0.9563 - val_accuracy: 0.9389 - val_loss: 0.9235 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:15 3s/step - accuracy: 0.9688 - loss: 0.6038 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.9688 - loss: 0.6038 - val_accuracy: 0.5000 - val_loss: 14.1933 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2088.1648399829865 segundos Uso de CPU durante el entrenamiento: 52.4% Aumento en uso de memoria: 0.7475776672363281 GB Resultados para lr=0.0001, l2=0.01, batch_size=32: Tiempo: 2088.1648399829865 segundos, CPU: 52.4%, Memoria: 0.7475776672363281 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_7"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_3 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_3 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_3 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.1922 - loss: 19.0671 Epoch 1: val_accuracy improved from -inf to 0.63210, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 341s 8s/step - accuracy: 0.1956 - loss: 18.9325 - val_accuracy: 0.6321 - val_loss: 5.9457 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:12 6s/step - accuracy: 0.7031 - loss: 3.8762 Epoch 2: val_accuracy did not improve from 0.63210 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.7031 - loss: 3.8762 - val_accuracy: 0.5294 - val_loss: 7.1882 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.6876 - loss: 4.1880 Epoch 3: val_accuracy improved from 0.63210 to 0.76705, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 342s 8s/step - accuracy: 0.6885 - loss: 4.1744 - val_accuracy: 0.7670 - val_loss: 3.3744 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:05 6s/step - accuracy: 0.8125 - loss: 3.1874 Epoch 4: val_accuracy improved from 0.76705 to 0.79412, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 93ms/step - accuracy: 0.8125 - loss: 3.1874 - val_accuracy: 0.7941 - val_loss: 2.1933 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.7991 - loss: 2.5200 Epoch 5: val_accuracy improved from 0.79412 to 0.83381, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.7996 - loss: 2.5144 - val_accuracy: 0.8338 - val_loss: 2.2552 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.8438 - loss: 2.4263 Epoch 6: val_accuracy improved from 0.83381 to 0.88235, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 102ms/step - accuracy: 0.8438 - loss: 2.4263 - val_accuracy: 0.8824 - val_loss: 2.1459 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.8786 - loss: 1.4324 Epoch 7: val_accuracy did not improve from 0.88235 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.8785 - loss: 1.4341 - val_accuracy: 0.8693 - val_loss: 1.6446 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.7656 - loss: 3.4719 Epoch 8: val_accuracy did not improve from 0.88235 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.7656 - loss: 3.4719 - val_accuracy: 0.8529 - val_loss: 2.0173 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.8951 - loss: 1.4887 Epoch 9: val_accuracy improved from 0.88235 to 0.88636, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 333s 8s/step - accuracy: 0.8952 - loss: 1.4857 - val_accuracy: 0.8864 - val_loss: 1.3530 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:53 6s/step - accuracy: 0.9219 - loss: 0.9825 Epoch 10: val_accuracy improved from 0.88636 to 0.91176, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 101ms/step - accuracy: 0.9219 - loss: 0.9825 - val_accuracy: 0.9118 - val_loss: 1.8430 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9223 - loss: 1.0613 Epoch 11: val_accuracy did not improve from 0.91176 40/40 ━━━━━━━━━━━━━━━━━━━━ 334s 8s/step - accuracy: 0.9222 - loss: 1.0610 - val_accuracy: 0.8991 - val_loss: 1.4575 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:06 6s/step - accuracy: 0.9531 - loss: 0.6821 Epoch 12: val_accuracy improved from 0.91176 to 0.97059, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 92ms/step - accuracy: 0.9531 - loss: 0.6821 - val_accuracy: 0.9706 - val_loss: 1.2731 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9265 - loss: 1.0374 Epoch 13: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 335s 8s/step - accuracy: 0.9266 - loss: 1.0358 - val_accuracy: 0.9091 - val_loss: 1.1663 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:05 6s/step - accuracy: 0.9375 - loss: 0.9251 Epoch 14: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 103ms/step - accuracy: 0.9375 - loss: 0.9251 - val_accuracy: 0.8824 - val_loss: 1.1990 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9472 - loss: 0.7411 Epoch 15: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.9471 - loss: 0.7420 - val_accuracy: 0.9332 - val_loss: 1.0630 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.9219 - loss: 0.8108 Epoch 16: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.9219 - loss: 0.8108 - val_accuracy: 0.9412 - val_loss: 0.7146 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9418 - loss: 0.8265 Epoch 17: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.9419 - loss: 0.8258 - val_accuracy: 0.9261 - val_loss: 0.9895 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.9531 - loss: 0.8481 Epoch 18: val_accuracy improved from 0.97059 to 1.00000, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 91ms/step - accuracy: 0.9531 - loss: 0.8481 - val_accuracy: 1.0000 - val_loss: 0.3511 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9609 - loss: 0.7226 Epoch 19: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 332s 8s/step - accuracy: 0.9609 - loss: 0.7225 - val_accuracy: 0.9361 - val_loss: 1.1457 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:52 6s/step - accuracy: 0.9688 - loss: 0.5045 Epoch 20: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 84ms/step - accuracy: 0.9688 - loss: 0.5045 - val_accuracy: 0.8529 - val_loss: 0.9922 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9587 - loss: 0.7028 Epoch 21: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 332s 8s/step - accuracy: 0.9587 - loss: 0.7022 - val_accuracy: 0.9361 - val_loss: 1.0594 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.9688 - loss: 0.5983 Epoch 22: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 93ms/step - accuracy: 0.9688 - loss: 0.5983 - val_accuracy: 0.9118 - val_loss: 0.9925 Epoch 23/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9646 - loss: 0.5919 Epoch 23: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 329s 8s/step - accuracy: 0.9646 - loss: 0.5918 - val_accuracy: 0.9503 - val_loss: 0.6772 Epoch 24/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:14 7s/step - accuracy: 0.9531 - loss: 0.4978 Epoch 24: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.9531 - loss: 0.4978 - val_accuracy: 0.9706 - val_loss: 0.9520 Epoch 25/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9658 - loss: 0.5349 Epoch 25: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9659 - loss: 0.5353 - val_accuracy: 0.9531 - val_loss: 0.7616 Epoch 26/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 1.0000 - loss: 0.3496 Epoch 26: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 84ms/step - accuracy: 1.0000 - loss: 0.3496 - val_accuracy: 0.9706 - val_loss: 0.5385 Epoch 27/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9776 - loss: 0.4854 Epoch 27: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.9775 - loss: 0.4859 - val_accuracy: 0.9474 - val_loss: 0.8444 Epoch 28/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.9688 - loss: 0.5569 Epoch 28: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.9688 - loss: 0.5569 - val_accuracy: 0.9706 - val_loss: 0.4087 Epoch 28: early stopping Restoring model weights from the end of the best epoch: 18. Tiempo transcurrido para el entrenamiento: 4834.694913864136 segundos Uso de CPU durante el entrenamiento: 36.300000000000004% Aumento en uso de memoria: 0.8008613586425781 GB Resultados para lr=0.0001, l2=0.01, batch_size=64: Tiempo: 4834.694913864136 segundos, CPU: 36.300000000000004%, Memoria: 0.8008613586425781 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_9"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_4 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_4 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_4 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.3164 - loss: 15.9286 Epoch 1: val_accuracy improved from -inf to 0.77174, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 352s 2s/step - accuracy: 0.3175 - loss: 15.8886 - val_accuracy: 0.7717 - val_loss: 3.0940 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.9375 - loss: 0.9383 Epoch 2: val_accuracy improved from 0.77174 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.9375 - loss: 0.9383 - val_accuracy: 1.0000 - val_loss: 0.3275 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.8018 - loss: 2.7480 Epoch 3: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 347s 2s/step - accuracy: 0.8020 - loss: 2.7454 - val_accuracy: 0.8492 - val_loss: 2.1670 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 0.7500 - loss: 2.1549 Epoch 4: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7500 - loss: 2.1549 - val_accuracy: 1.0000 - val_loss: 0.3364 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9036 - loss: 1.3884 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 342s 2s/step - accuracy: 0.9036 - loss: 1.3890 - val_accuracy: 0.8845 - val_loss: 1.6293 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 1.0000 - loss: 0.3511 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.3511 - val_accuracy: 1.0000 - val_loss: 0.3293 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9175 - loss: 1.1924 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 343s 2s/step - accuracy: 0.9175 - loss: 1.1918 - val_accuracy: 0.9253 - val_loss: 1.1174 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.6875 - loss: 1.6990 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6875 - loss: 1.6990 - val_accuracy: 1.0000 - val_loss: 0.3300 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9415 - loss: 0.8481 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 341s 2s/step - accuracy: 0.9415 - loss: 0.8484 - val_accuracy: 0.9416 - val_loss: 1.0386 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:18 2s/step - accuracy: 0.9375 - loss: 1.4181 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 1.4181 - val_accuracy: 1.0000 - val_loss: 0.3307 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9387 - loss: 0.9345 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 341s 2s/step - accuracy: 0.9388 - loss: 0.9342 - val_accuracy: 0.9484 - val_loss: 0.7660 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 0.8750 - loss: 0.5370 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.8750 - loss: 0.5370 - val_accuracy: 1.0000 - val_loss: 0.3313 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2077.7854232788086 segundos Uso de CPU durante el entrenamiento: 1.5% Aumento en uso de memoria: 0.2444610595703125 GB Resultados para lr=0.0001, l2=0.05, batch_size=16: Tiempo: 2077.7854232788086 segundos, CPU: 1.5%, Memoria: 0.2444610595703125 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_11"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_5 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_5 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_5 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.2749 - loss: 17.9757 Epoch 1: val_accuracy improved from -inf to 0.69565, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 351s 4s/step - accuracy: 0.2771 - loss: 17.8896 - val_accuracy: 0.6957 - val_loss: 3.9034 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:02 3s/step - accuracy: 0.7188 - loss: 5.1091 Epoch 2: val_accuracy improved from 0.69565 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 8ms/step - accuracy: 0.7188 - loss: 5.1091 - val_accuracy: 1.0000 - val_loss: 0.3802 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.7577 - loss: 3.2050 Epoch 3: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 352s 4s/step - accuracy: 0.7580 - loss: 3.1992 - val_accuracy: 0.8098 - val_loss: 2.3410 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:09 3s/step - accuracy: 0.8750 - loss: 2.6156 Epoch 4: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.8750 - loss: 2.6156 - val_accuracy: 1.0000 - val_loss: 0.3446 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.8479 - loss: 1.9547 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.8481 - loss: 1.9520 - val_accuracy: 0.8859 - val_loss: 1.5579 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.9375 - loss: 0.6433 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.9375 - loss: 0.6433 - val_accuracy: 1.0000 - val_loss: 0.3449 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9015 - loss: 1.2463 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 348s 4s/step - accuracy: 0.9015 - loss: 1.2457 - val_accuracy: 0.9090 - val_loss: 1.2508 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:05 3s/step - accuracy: 0.9062 - loss: 1.0065 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.9062 - loss: 1.0065 - val_accuracy: 1.0000 - val_loss: 0.3449 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9299 - loss: 1.0240 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 350s 4s/step - accuracy: 0.9299 - loss: 1.0245 - val_accuracy: 0.9062 - val_loss: 1.3369 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:02 3s/step - accuracy: 0.9375 - loss: 1.0571 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.9375 - loss: 1.0571 - val_accuracy: 1.0000 - val_loss: 0.3448 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9457 - loss: 0.8487 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 349s 4s/step - accuracy: 0.9456 - loss: 0.8494 - val_accuracy: 0.9457 - val_loss: 0.8874 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 5:48 4s/step - accuracy: 0.9062 - loss: 1.3641 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 5s 5ms/step - accuracy: 0.9062 - loss: 1.3641 - val_accuracy: 0.5000 - val_loss: 2.0930 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2118.0271093845367 segundos Uso de CPU durante el entrenamiento: 25.19999999999999% Aumento en uso de memoria: -0.6207160949707031 GB Resultados para lr=0.0001, l2=0.05, batch_size=32: Tiempo: 2118.0271093845367 segundos, CPU: 25.19999999999999%, Memoria: -0.6207160949707031 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_13"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_6 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_6 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_6 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.1886 - loss: 19.5179 Epoch 1: val_accuracy improved from -inf to 0.60795, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 345s 8s/step - accuracy: 0.1921 - loss: 19.3752 - val_accuracy: 0.6080 - val_loss: 5.6143 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.5781 - loss: 5.2795 Epoch 2: val_accuracy did not improve from 0.60795 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 99ms/step - accuracy: 0.5781 - loss: 5.2795 - val_accuracy: 0.5588 - val_loss: 6.2684 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.6944 - loss: 4.2246 Epoch 3: val_accuracy improved from 0.60795 to 0.76278, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.6952 - loss: 4.2127 - val_accuracy: 0.7628 - val_loss: 2.8212 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.7500 - loss: 3.9552 Epoch 4: val_accuracy improved from 0.76278 to 0.82353, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 104ms/step - accuracy: 0.7500 - loss: 3.9552 - val_accuracy: 0.8235 - val_loss: 2.4051 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.8005 - loss: 2.4941 Epoch 5: val_accuracy improved from 0.82353 to 0.83949, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 350s 9s/step - accuracy: 0.8010 - loss: 2.4907 - val_accuracy: 0.8395 - val_loss: 2.1663 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:06 6s/step - accuracy: 0.7812 - loss: 2.9902 Epoch 6: val_accuracy did not improve from 0.83949 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 91ms/step - accuracy: 0.7812 - loss: 2.9902 - val_accuracy: 0.7941 - val_loss: 2.3284 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.8697 - loss: 1.6594 Epoch 7: val_accuracy improved from 0.83949 to 0.88778, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 355s 9s/step - accuracy: 0.8699 - loss: 1.6572 - val_accuracy: 0.8878 - val_loss: 1.6590 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:12 6s/step - accuracy: 0.8906 - loss: 1.3438 Epoch 8: val_accuracy did not improve from 0.88778 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 94ms/step - accuracy: 0.8906 - loss: 1.3438 - val_accuracy: 0.8529 - val_loss: 3.2193 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.8963 - loss: 1.3143 Epoch 9: val_accuracy improved from 0.88778 to 0.90341, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.8963 - loss: 1.3148 - val_accuracy: 0.9034 - val_loss: 1.2454 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.8906 - loss: 1.2109 Epoch 10: val_accuracy improved from 0.90341 to 0.97059, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 105ms/step - accuracy: 0.8906 - loss: 1.2109 - val_accuracy: 0.9706 - val_loss: 0.4164 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9270 - loss: 0.9843 Epoch 11: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.9267 - loss: 0.9888 - val_accuracy: 0.9162 - val_loss: 1.1808 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:11 6s/step - accuracy: 0.9844 - loss: 0.4208 Epoch 12: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.9844 - loss: 0.4208 - val_accuracy: 0.9706 - val_loss: 0.4645 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9348 - loss: 0.9616 Epoch 13: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.9347 - loss: 0.9604 - val_accuracy: 0.9304 - val_loss: 0.9586 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:10 6s/step - accuracy: 0.9688 - loss: 0.4415 Epoch 14: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.9688 - loss: 0.4415 - val_accuracy: 0.9118 - val_loss: 1.2959 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9431 - loss: 0.9182 Epoch 15: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.9431 - loss: 0.9169 - val_accuracy: 0.9219 - val_loss: 1.1502 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:54 6s/step - accuracy: 0.9219 - loss: 0.8773 Epoch 16: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 89ms/step - accuracy: 0.9219 - loss: 0.8773 - val_accuracy: 0.8824 - val_loss: 2.1477 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9428 - loss: 0.7785 Epoch 17: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.9428 - loss: 0.7779 - val_accuracy: 0.9375 - val_loss: 0.9925 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.9844 - loss: 0.5313 Epoch 18: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.9844 - loss: 0.5313 - val_accuracy: 0.9118 - val_loss: 2.3550 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9493 - loss: 0.7955 Epoch 19: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 344s 8s/step - accuracy: 0.9494 - loss: 0.7945 - val_accuracy: 0.9403 - val_loss: 0.8397 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.9688 - loss: 0.5107 Epoch 20: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.9688 - loss: 0.5107 - val_accuracy: 0.9118 - val_loss: 0.7869 Epoch 20: early stopping Restoring model weights from the end of the best epoch: 10. Tiempo transcurrido para el entrenamiento: 3519.708744764328 segundos Uso de CPU durante el entrenamiento: 34.2% Aumento en uso de memoria: 0.21001052856445312 GB Resultados para lr=0.0001, l2=0.05, batch_size=64: Tiempo: 3519.708744764328 segundos, CPU: 34.2%, Memoria: 0.21001052856445312 GB Precisión de validación: 0.970588207244873 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_15"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_7 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_7 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_7 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.3288 - loss: 14.9486 Epoch 1: val_accuracy improved from -inf to 0.78397, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.3300 - loss: 14.9123 - val_accuracy: 0.7840 - val_loss: 2.6475 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:29 2s/step - accuracy: 0.6875 - loss: 3.8685 Epoch 2: val_accuracy did not improve from 0.78397 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6875 - loss: 3.8685 - val_accuracy: 0.5000 - val_loss: 13.7053 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.8037 - loss: 2.5107 Epoch 3: val_accuracy improved from 0.78397 to 0.86821, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.8038 - loss: 2.5097 - val_accuracy: 0.8682 - val_loss: 1.8337 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:26 2s/step - accuracy: 0.7500 - loss: 3.8194 Epoch 4: val_accuracy improved from 0.86821 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.7500 - loss: 3.8194 - val_accuracy: 1.0000 - val_loss: 0.3340 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.8873 - loss: 1.5738 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.8873 - loss: 1.5739 - val_accuracy: 0.9062 - val_loss: 1.2455 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 5:00 2s/step - accuracy: 0.8125 - loss: 3.1790 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.8125 - loss: 3.1790 - val_accuracy: 1.0000 - val_loss: 0.3335 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9216 - loss: 1.1913 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 348s 2s/step - accuracy: 0.9216 - loss: 1.1910 - val_accuracy: 0.9280 - val_loss: 1.2943 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:22 2s/step - accuracy: 1.0000 - loss: 0.3343 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.3343 - val_accuracy: 1.0000 - val_loss: 0.3343 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9439 - loss: 0.8334 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.9439 - loss: 0.8338 - val_accuracy: 0.9226 - val_loss: 1.2754 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:18 2s/step - accuracy: 1.0000 - loss: 0.3348 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.3348 - val_accuracy: 0.5000 - val_loss: 3.9811 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9411 - loss: 0.9372 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 359s 2s/step - accuracy: 0.9411 - loss: 0.9368 - val_accuracy: 0.9361 - val_loss: 1.0591 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:32 2s/step - accuracy: 1.0000 - loss: 0.3353 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.3353 - val_accuracy: 1.0000 - val_loss: 0.3353 Epoch 13/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9672 - loss: 0.6771 Epoch 13: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.9672 - loss: 0.6773 - val_accuracy: 0.9484 - val_loss: 0.8308 Epoch 14/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.9375 - loss: 1.1467 Epoch 14: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 1.1467 - val_accuracy: 0.5000 - val_loss: 5.1069 Epoch 14: early stopping Restoring model weights from the end of the best epoch: 4. Tiempo transcurrido para el entrenamiento: 2501.10480260849 segundos Uso de CPU durante el entrenamiento: 36.00000000000001% Aumento en uso de memoria: -0.6569480895996094 GB Resultados para lr=0.0001, l2=0.1, batch_size=16: Tiempo: 2501.10480260849 segundos, CPU: 36.00000000000001%, Memoria: -0.6569480895996094 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_17"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_8 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_8 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_8 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.2459 - loss: 18.2191 Epoch 1: val_accuracy improved from -inf to 0.71467, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.2481 - loss: 18.1370 - val_accuracy: 0.7147 - val_loss: 3.9515 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:20 3s/step - accuracy: 0.6562 - loss: 4.0978 Epoch 2: val_accuracy did not improve from 0.71467 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.6562 - loss: 4.0978 - val_accuracy: 0.5000 - val_loss: 1.6435 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.7742 - loss: 3.2432 Epoch 3: val_accuracy improved from 0.71467 to 0.82609, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.7744 - loss: 3.2406 - val_accuracy: 0.8261 - val_loss: 1.9856 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:13 3s/step - accuracy: 0.8750 - loss: 1.7970 Epoch 4: val_accuracy improved from 0.82609 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 7ms/step - accuracy: 0.8750 - loss: 1.7970 - val_accuracy: 1.0000 - val_loss: 0.3507 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.8541 - loss: 2.0354 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.8543 - loss: 2.0326 - val_accuracy: 0.8682 - val_loss: 1.9640 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:39 3s/step - accuracy: 0.8125 - loss: 1.9255 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.8125 - loss: 1.9255 - val_accuracy: 1.0000 - val_loss: 0.3443 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.8920 - loss: 1.3511 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.8921 - loss: 1.3507 - val_accuracy: 0.9144 - val_loss: 1.1453 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:33 3s/step - accuracy: 0.9062 - loss: 2.5067 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.9062 - loss: 2.5067 - val_accuracy: 1.0000 - val_loss: 0.3443 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9188 - loss: 1.1145 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 349s 4s/step - accuracy: 0.9189 - loss: 1.1132 - val_accuracy: 0.9253 - val_loss: 1.0544 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:27 3s/step - accuracy: 1.0000 - loss: 0.3586 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step - accuracy: 1.0000 - loss: 0.3586 - val_accuracy: 1.0000 - val_loss: 0.3443 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9408 - loss: 0.8936 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.9407 - loss: 0.8933 - val_accuracy: 0.9443 - val_loss: 0.9498 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.9688 - loss: 0.5014 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9688 - loss: 0.5014 - val_accuracy: 1.0000 - val_loss: 0.3453 Epoch 13/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9487 - loss: 0.8089 Epoch 13: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.9487 - loss: 0.8086 - val_accuracy: 0.9253 - val_loss: 1.0922 Epoch 14/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:11 3s/step - accuracy: 0.9688 - loss: 0.5556 Epoch 14: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9688 - loss: 0.5556 - val_accuracy: 1.0000 - val_loss: 0.4898 Epoch 14: early stopping Restoring model weights from the end of the best epoch: 4. Tiempo transcurrido para el entrenamiento: 2471.026523590088 segundos Uso de CPU durante el entrenamiento: 58.49999999999999% Aumento en uso de memoria: 0.4023284912109375 GB Resultados para lr=0.0001, l2=0.1, batch_size=32: Tiempo: 2471.026523590088 segundos, CPU: 58.49999999999999%, Memoria: 0.4023284912109375 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_19"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_9 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_9 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_9 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.1955 - loss: 20.0114 Epoch 1: val_accuracy improved from -inf to 0.61364, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.1991 - loss: 19.8526 - val_accuracy: 0.6136 - val_loss: 5.2234 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.6250 - loss: 5.4743 Epoch 2: val_accuracy did not improve from 0.61364 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.6250 - loss: 5.4743 - val_accuracy: 0.5882 - val_loss: 4.2118 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.7038 - loss: 4.1146 Epoch 3: val_accuracy improved from 0.61364 to 0.74858, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 332s 8s/step - accuracy: 0.7045 - loss: 4.1037 - val_accuracy: 0.7486 - val_loss: 3.3856 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:55 6s/step - accuracy: 0.7500 - loss: 2.9660 Epoch 4: val_accuracy did not improve from 0.74858 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 88ms/step - accuracy: 0.7500 - loss: 2.9660 - val_accuracy: 0.7353 - val_loss: 5.2887 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.8059 - loss: 2.7076 Epoch 5: val_accuracy improved from 0.74858 to 0.82955, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 333s 8s/step - accuracy: 0.8061 - loss: 2.7023 - val_accuracy: 0.8295 - val_loss: 2.3313 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:08 6s/step - accuracy: 0.9062 - loss: 1.0695 Epoch 6: val_accuracy improved from 0.82955 to 0.88235, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 92ms/step - accuracy: 0.9062 - loss: 1.0695 - val_accuracy: 0.8824 - val_loss: 1.6858 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.8726 - loss: 1.6398 Epoch 7: val_accuracy did not improve from 0.88235 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.8727 - loss: 1.6389 - val_accuracy: 0.8679 - val_loss: 1.8297 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.9219 - loss: 1.4198 Epoch 8: val_accuracy did not improve from 0.88235 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 95ms/step - accuracy: 0.9219 - loss: 1.4198 - val_accuracy: 0.8529 - val_loss: 1.1923 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9035 - loss: 1.2833 Epoch 9: val_accuracy improved from 0.88235 to 0.91051, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 348s 9s/step - accuracy: 0.9033 - loss: 1.2849 - val_accuracy: 0.9105 - val_loss: 1.2575 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:13 7s/step - accuracy: 0.9375 - loss: 1.1384 Epoch 10: val_accuracy did not improve from 0.91051 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.9375 - loss: 1.1384 - val_accuracy: 0.8529 - val_loss: 2.2396 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9174 - loss: 1.1176 Epoch 11: val_accuracy did not improve from 0.91051 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.9175 - loss: 1.1172 - val_accuracy: 0.9077 - val_loss: 1.2359 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.9531 - loss: 0.7729 Epoch 12: val_accuracy did not improve from 0.91051 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 96ms/step - accuracy: 0.9531 - loss: 0.7729 - val_accuracy: 0.8235 - val_loss: 1.5957 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9211 - loss: 1.0042 Epoch 13: val_accuracy improved from 0.91051 to 0.92472, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.9212 - loss: 1.0029 - val_accuracy: 0.9247 - val_loss: 0.9894 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:03 6s/step - accuracy: 0.8906 - loss: 1.5692 Epoch 14: val_accuracy did not improve from 0.92472 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.8906 - loss: 1.5692 - val_accuracy: 0.8529 - val_loss: 2.5456 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9359 - loss: 0.8661 Epoch 15: val_accuracy did not improve from 0.92472 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.9360 - loss: 0.8662 - val_accuracy: 0.9190 - val_loss: 1.1437 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:13 7s/step - accuracy: 0.9375 - loss: 0.7110 Epoch 16: val_accuracy improved from 0.92472 to 0.97059, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 97ms/step - accuracy: 0.9375 - loss: 0.7110 - val_accuracy: 0.9706 - val_loss: 0.7691 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9349 - loss: 0.9262 Epoch 17: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.9351 - loss: 0.9235 - val_accuracy: 0.9261 - val_loss: 1.0347 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:24 7s/step - accuracy: 0.9375 - loss: 0.8083 Epoch 18: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 95ms/step - accuracy: 0.9375 - loss: 0.8083 - val_accuracy: 0.9706 - val_loss: 0.5607 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9601 - loss: 0.6419 Epoch 19: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.9599 - loss: 0.6436 - val_accuracy: 0.9389 - val_loss: 0.8187 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.9531 - loss: 0.5130 Epoch 20: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.9531 - loss: 0.5130 - val_accuracy: 0.8824 - val_loss: 1.0313 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9520 - loss: 0.7499 Epoch 21: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.9520 - loss: 0.7493 - val_accuracy: 0.9574 - val_loss: 0.7244 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:10 6s/step - accuracy: 0.9688 - loss: 0.4707 Epoch 22: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.9688 - loss: 0.4707 - val_accuracy: 0.8824 - val_loss: 2.6402 Epoch 23/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9643 - loss: 0.6365 Epoch 23: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 343s 9s/step - accuracy: 0.9642 - loss: 0.6369 - val_accuracy: 0.9361 - val_loss: 1.0553 Epoch 24/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:13 7s/step - accuracy: 0.9531 - loss: 1.1044 Epoch 24: val_accuracy improved from 0.97059 to 1.00000, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 93ms/step - accuracy: 0.9531 - loss: 1.1044 - val_accuracy: 1.0000 - val_loss: 0.3498 Epoch 25/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9611 - loss: 0.7056 Epoch 25: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 332s 8s/step - accuracy: 0.9612 - loss: 0.7053 - val_accuracy: 0.9503 - val_loss: 0.8549 Epoch 26/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.9844 - loss: 0.3862 Epoch 26: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 83ms/step - accuracy: 0.9844 - loss: 0.3862 - val_accuracy: 0.9118 - val_loss: 1.3191 Epoch 27/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9657 - loss: 0.6039 Epoch 27: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 332s 8s/step - accuracy: 0.9658 - loss: 0.6030 - val_accuracy: 0.9545 - val_loss: 0.8867 Epoch 28/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:07 6s/step - accuracy: 0.9688 - loss: 0.4996 Epoch 28: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.9688 - loss: 0.4996 - val_accuracy: 0.9706 - val_loss: 0.4845 Epoch 29/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9643 - loss: 0.5789 Epoch 29: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.9644 - loss: 0.5786 - val_accuracy: 0.9489 - val_loss: 0.8452 Epoch 30/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:55 6s/step - accuracy: 0.9844 - loss: 0.3806 Epoch 30: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 83ms/step - accuracy: 0.9844 - loss: 0.3806 - val_accuracy: 1.0000 - val_loss: 0.3484 Epoch 31/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9774 - loss: 0.5311 Epoch 31: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 333s 8s/step - accuracy: 0.9774 - loss: 0.5315 - val_accuracy: 0.9545 - val_loss: 0.7508 Epoch 32/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:05 6s/step - accuracy: 0.9688 - loss: 0.4833 Epoch 32: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 97ms/step - accuracy: 0.9688 - loss: 0.4833 - val_accuracy: 0.9706 - val_loss: 0.4368 Epoch 33/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9741 - loss: 0.5761 Epoch 33: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.9741 - loss: 0.5749 - val_accuracy: 0.9645 - val_loss: 0.6990 Epoch 34/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.9531 - loss: 0.7288 Epoch 34: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.9531 - loss: 0.7288 - val_accuracy: 0.9412 - val_loss: 0.8989 Epoch 34: early stopping Restoring model weights from the end of the best epoch: 24. Tiempo transcurrido para el entrenamiento: 5893.052283287048 segundos Uso de CPU durante el entrenamiento: 17.799999999999997% Aumento en uso de memoria: 0.2658805847167969 GB Resultados para lr=0.0001, l2=0.1, batch_size=64: Tiempo: 5893.052283287048 segundos, CPU: 17.799999999999997%, Memoria: 0.2658805847167969 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_21"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_10 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_10 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_10 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5546 - loss: 12.8082 Epoch 1: val_accuracy improved from -inf to 0.89538, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 359s 2s/step - accuracy: 0.5557 - loss: 12.7727 - val_accuracy: 0.8954 - val_loss: 2.8618 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:37 2s/step - accuracy: 0.8750 - loss: 1.4841 Epoch 2: val_accuracy did not improve from 0.89538 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.8750 - loss: 1.4841 - val_accuracy: 0.5000 - val_loss: 11.1557 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.8976 - loss: 2.6548 Epoch 3: val_accuracy improved from 0.89538 to 0.92391, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.8977 - loss: 2.6542 - val_accuracy: 0.9239 - val_loss: 2.0025 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:27 2s/step - accuracy: 1.0000 - loss: 0.3994 Epoch 4: val_accuracy improved from 0.92391 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 1.0000 - loss: 0.3994 - val_accuracy: 1.0000 - val_loss: 0.3994 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9292 - loss: 2.0561 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.9292 - loss: 2.0567 - val_accuracy: 0.9416 - val_loss: 2.2820 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:24 2s/step - accuracy: 0.9375 - loss: 0.6514 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 0.6514 - val_accuracy: 1.0000 - val_loss: 0.4584 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9354 - loss: 2.1966 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.9354 - loss: 2.1960 - val_accuracy: 0.9226 - val_loss: 2.8116 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:16 2s/step - accuracy: 0.9375 - loss: 2.3100 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 2.3100 - val_accuracy: 1.0000 - val_loss: 0.5222 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9501 - loss: 1.9485 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 347s 2s/step - accuracy: 0.9501 - loss: 1.9479 - val_accuracy: 0.9361 - val_loss: 2.8034 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:05 2s/step - accuracy: 1.0000 - loss: 0.5833 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.5833 - val_accuracy: 1.0000 - val_loss: 0.5837 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9647 - loss: 1.7192 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 343s 2s/step - accuracy: 0.9647 - loss: 1.7187 - val_accuracy: 0.9484 - val_loss: 2.5617 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:35 2s/step - accuracy: 0.9375 - loss: 2.8202 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 2.8202 - val_accuracy: 1.0000 - val_loss: 0.6315 Epoch 13/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9584 - loss: 2.0239 Epoch 13: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 344s 2s/step - accuracy: 0.9584 - loss: 2.0232 - val_accuracy: 0.9361 - val_loss: 3.0993 Epoch 14/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 1.0000 - loss: 0.6936 Epoch 14: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.6936 - val_accuracy: 1.0000 - val_loss: 0.6940 Epoch 14: early stopping Restoring model weights from the end of the best epoch: 4. Tiempo transcurrido para el entrenamiento: 2458.760917901993 segundos Uso de CPU durante el entrenamiento: 22.60000000000001% Aumento en uso de memoria: -0.29473114013671875 GB Resultados para lr=0.0005, l2=0.01, batch_size=16: Tiempo: 2458.760917901993 segundos, CPU: 22.60000000000001%, Memoria: -0.29473114013671875 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_23"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_11 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_11 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_11 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5144 - loss: 14.6996 Epoch 1: val_accuracy improved from -inf to 0.88179, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 350s 4s/step - accuracy: 0.5167 - loss: 14.6152 - val_accuracy: 0.8818 - val_loss: 2.9613 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:12 3s/step - accuracy: 0.9688 - loss: 1.7583 Epoch 2: val_accuracy improved from 0.88179 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 7ms/step - accuracy: 0.9688 - loss: 1.7583 - val_accuracy: 1.0000 - val_loss: 0.3864 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9136 - loss: 1.9025 Epoch 3: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 346s 4s/step - accuracy: 0.9136 - loss: 1.9027 - val_accuracy: 0.9307 - val_loss: 2.0287 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:08 3s/step - accuracy: 0.8750 - loss: 1.3993 Epoch 4: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.8750 - loss: 1.3993 - val_accuracy: 0.5000 - val_loss: 5.4832 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9349 - loss: 1.3461 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.9348 - loss: 1.3497 - val_accuracy: 0.9280 - val_loss: 1.9616 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:12 3s/step - accuracy: 0.8750 - loss: 1.5736 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.8750 - loss: 1.5736 - val_accuracy: 1.0000 - val_loss: 0.3932 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9467 - loss: 1.3602 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.9467 - loss: 1.3599 - val_accuracy: 0.9606 - val_loss: 1.3207 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:18 3s/step - accuracy: 1.0000 - loss: 0.4216 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 1.0000 - loss: 0.4216 - val_accuracy: 1.0000 - val_loss: 0.4220 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9514 - loss: 1.2499 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.9514 - loss: 1.2513 - val_accuracy: 0.9280 - val_loss: 2.4269 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:05 3s/step - accuracy: 0.9688 - loss: 0.5192 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.9688 - loss: 0.5192 - val_accuracy: 1.0000 - val_loss: 0.4482 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9588 - loss: 1.5175 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.9587 - loss: 1.5186 - val_accuracy: 0.9497 - val_loss: 1.5621 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:15 3s/step - accuracy: 1.0000 - loss: 0.4738 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 1.0000 - loss: 0.4738 - val_accuracy: 1.0000 - val_loss: 0.4744 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2090.6789672374725 segundos Uso de CPU durante el entrenamiento: 28.900000000000006% Aumento en uso de memoria: -0.15079116821289062 GB Resultados para lr=0.0005, l2=0.01, batch_size=32: Tiempo: 2090.6789672374725 segundos, CPU: 28.900000000000006%, Memoria: -0.15079116821289062 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_25"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_12 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_12 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_12 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4318 - loss: 14.4941 Epoch 1: val_accuracy improved from -inf to 0.86790, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 344s 8s/step - accuracy: 0.4373 - loss: 14.3318 - val_accuracy: 0.8679 - val_loss: 2.4264 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:11 6s/step - accuracy: 0.9062 - loss: 2.3292 Epoch 2: val_accuracy improved from 0.86790 to 0.94118, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 98ms/step - accuracy: 0.9062 - loss: 2.3292 - val_accuracy: 0.9412 - val_loss: 0.5288 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9010 - loss: 2.0859 Epoch 3: val_accuracy did not improve from 0.94118 40/40 ━━━━━━━━━━━━━━━━━━━━ 351s 9s/step - accuracy: 0.9014 - loss: 2.0799 - val_accuracy: 0.9020 - val_loss: 1.8627 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:29 7s/step - accuracy: 0.9531 - loss: 3.0323 Epoch 4: val_accuracy did not improve from 0.94118 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 97ms/step - accuracy: 0.9531 - loss: 3.0323 - val_accuracy: 0.8824 - val_loss: 2.4457 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9297 - loss: 1.5168 Epoch 5: val_accuracy did not improve from 0.94118 40/40 ━━━━━━━━━━━━━━━━━━━━ 333s 8s/step - accuracy: 0.9297 - loss: 1.5194 - val_accuracy: 0.9219 - val_loss: 1.6076 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:08 6s/step - accuracy: 0.9219 - loss: 1.4438 Epoch 6: val_accuracy did not improve from 0.94118 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 84ms/step - accuracy: 0.9219 - loss: 1.4438 - val_accuracy: 0.8529 - val_loss: 2.5131 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9493 - loss: 1.2149 Epoch 7: val_accuracy improved from 0.94118 to 0.94602, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 332s 8s/step - accuracy: 0.9495 - loss: 1.2123 - val_accuracy: 0.9460 - val_loss: 1.4304 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:53 6s/step - accuracy: 0.9688 - loss: 0.9532 Epoch 8: val_accuracy improved from 0.94602 to 0.97059, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 93ms/step - accuracy: 0.9688 - loss: 0.9532 - val_accuracy: 0.9706 - val_loss: 0.6769 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9726 - loss: 0.8028 Epoch 9: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 333s 8s/step - accuracy: 0.9725 - loss: 0.8034 - val_accuracy: 0.9545 - val_loss: 1.0530 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:08 6s/step - accuracy: 0.9844 - loss: 0.7241 Epoch 10: val_accuracy improved from 0.97059 to 1.00000, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 98ms/step - accuracy: 0.9844 - loss: 0.7241 - val_accuracy: 1.0000 - val_loss: 0.4050 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9716 - loss: 0.7662 Epoch 11: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 335s 8s/step - accuracy: 0.9716 - loss: 0.7686 - val_accuracy: 0.9389 - val_loss: 1.3013 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.9375 - loss: 1.3103 Epoch 12: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 86ms/step - accuracy: 0.9375 - loss: 1.3103 - val_accuracy: 0.9706 - val_loss: 2.8014 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9732 - loss: 0.7986 Epoch 13: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9730 - loss: 0.8015 - val_accuracy: 0.9489 - val_loss: 1.3798 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:55 6s/step - accuracy: 0.9688 - loss: 1.5022 Epoch 14: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 95ms/step - accuracy: 0.9688 - loss: 1.5022 - val_accuracy: 0.9706 - val_loss: 1.3243 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9683 - loss: 0.7995 Epoch 15: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.9683 - loss: 0.8012 - val_accuracy: 0.9616 - val_loss: 1.3530 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.9688 - loss: 0.5179 Epoch 16: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 84ms/step - accuracy: 0.9688 - loss: 0.5179 - val_accuracy: 0.9706 - val_loss: 0.4562 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9647 - loss: 0.9873 Epoch 17: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 330s 8s/step - accuracy: 0.9647 - loss: 0.9888 - val_accuracy: 0.9574 - val_loss: 1.4293 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:31 7s/step - accuracy: 0.9219 - loss: 1.1328 Epoch 18: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.9219 - loss: 1.1328 - val_accuracy: 0.9706 - val_loss: 0.7354 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9726 - loss: 0.7761 Epoch 19: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9727 - loss: 0.7782 - val_accuracy: 0.9645 - val_loss: 1.0774 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:11 6s/step - accuracy: 0.9844 - loss: 0.9805 Epoch 20: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.9844 - loss: 0.9805 - val_accuracy: 0.9706 - val_loss: 0.7290 Epoch 20: early stopping Restoring model weights from the end of the best epoch: 10. Tiempo transcurrido para el entrenamiento: 3458.426070213318 segundos Uso de CPU durante el entrenamiento: 30.599999999999994% Aumento en uso de memoria: 0.24773025512695312 GB Resultados para lr=0.0005, l2=0.01, batch_size=64: Tiempo: 3458.426070213318 segundos, CPU: 30.599999999999994%, Memoria: 0.24773025512695312 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_27"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_13 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_13 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_13 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5648 - loss: 13.0838 Epoch 1: val_accuracy improved from -inf to 0.88859, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 344s 2s/step - accuracy: 0.5659 - loss: 13.0474 - val_accuracy: 0.8886 - val_loss: 2.6211 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.7500 - loss: 4.3496 Epoch 2: val_accuracy did not improve from 0.88859 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7500 - loss: 4.3496 - val_accuracy: 0.5000 - val_loss: 22.2576 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.8735 - loss: 3.6052 Epoch 3: val_accuracy improved from 0.88859 to 0.92391, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 341s 2s/step - accuracy: 0.8735 - loss: 3.6050 - val_accuracy: 0.9239 - val_loss: 2.4035 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 1.0000 - loss: 0.4120 Epoch 4: val_accuracy did not improve from 0.92391 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.4120 - val_accuracy: 0.0000e+00 - val_loss: 3.3323 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9347 - loss: 2.1269 Epoch 5: val_accuracy improved from 0.92391 to 0.93207, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.9347 - loss: 2.1282 - val_accuracy: 0.9321 - val_loss: 2.4189 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:05 2s/step - accuracy: 1.0000 - loss: 0.4679 Epoch 6: val_accuracy improved from 0.93207 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 1.0000 - loss: 0.4679 - val_accuracy: 1.0000 - val_loss: 0.4684 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9403 - loss: 2.0698 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 342s 2s/step - accuracy: 0.9404 - loss: 2.0689 - val_accuracy: 0.9620 - val_loss: 1.5386 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:40 2s/step - accuracy: 1.0000 - loss: 0.5220 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.5220 - val_accuracy: 1.0000 - val_loss: 0.5223 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9553 - loss: 1.6630 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 348s 2s/step - accuracy: 0.9553 - loss: 1.6637 - val_accuracy: 0.9538 - val_loss: 2.3739 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 1.0000 - loss: 0.5682 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.5682 - val_accuracy: 1.0000 - val_loss: 0.5686 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9657 - loss: 1.4156 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.9657 - loss: 1.4152 - val_accuracy: 0.9497 - val_loss: 1.6824 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 1.0000 - loss: 0.6043 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.6043 - val_accuracy: 1.0000 - val_loss: 0.6046 Epoch 13/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9670 - loss: 1.7097 Epoch 13: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 344s 2s/step - accuracy: 0.9670 - loss: 1.7114 - val_accuracy: 0.9389 - val_loss: 2.7864 Epoch 14/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 1.0000 - loss: 0.6567 Epoch 14: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.6567 - val_accuracy: 0.5000 - val_loss: 23.2149 Epoch 15/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9665 - loss: 1.5991 Epoch 15: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 342s 2s/step - accuracy: 0.9665 - loss: 1.5990 - val_accuracy: 0.9606 - val_loss: 1.9011 Epoch 16/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:33 2s/step - accuracy: 1.0000 - loss: 0.7121 Epoch 16: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.7121 - val_accuracy: 1.0000 - val_loss: 0.7124 Epoch 16: early stopping Restoring model weights from the end of the best epoch: 6. Tiempo transcurrido para el entrenamiento: 2777.534591436386 segundos Uso de CPU durante el entrenamiento: 40.7% Aumento en uso de memoria: 0.08380508422851562 GB Resultados para lr=0.0005, l2=0.05, batch_size=16: Tiempo: 2777.534591436386 segundos, CPU: 40.7%, Memoria: 0.08380508422851562 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_29"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_14 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_14 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_14 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.4948 - loss: 12.3616 Epoch 1: val_accuracy improved from -inf to 0.89810, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.4972 - loss: 12.2954 - val_accuracy: 0.8981 - val_loss: 2.1775 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:59 3s/step - accuracy: 0.8125 - loss: 7.0758 Epoch 2: val_accuracy improved from 0.89810 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 8ms/step - accuracy: 0.8125 - loss: 7.0758 - val_accuracy: 1.0000 - val_loss: 0.3430 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9089 - loss: 1.9784 Epoch 3: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 339s 4s/step - accuracy: 0.9089 - loss: 1.9796 - val_accuracy: 0.9212 - val_loss: 1.8589 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:59 3s/step - accuracy: 0.9688 - loss: 0.7362 Epoch 4: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9688 - loss: 0.7362 - val_accuracy: 1.0000 - val_loss: 0.3697 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9381 - loss: 1.4556 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 337s 4s/step - accuracy: 0.9382 - loss: 1.4561 - val_accuracy: 0.9375 - val_loss: 1.9736 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.9375 - loss: 0.9267 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.9375 - loss: 0.9267 - val_accuracy: 1.0000 - val_loss: 0.3942 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9429 - loss: 1.6580 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.9429 - loss: 1.6570 - val_accuracy: 0.9334 - val_loss: 2.3298 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:08 3s/step - accuracy: 1.0000 - loss: 0.4212 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 1.0000 - loss: 0.4212 - val_accuracy: 1.0000 - val_loss: 0.4211 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9507 - loss: 1.5098 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 339s 4s/step - accuracy: 0.9507 - loss: 1.5091 - val_accuracy: 0.9497 - val_loss: 1.5630 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.9688 - loss: 1.9052 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9688 - loss: 1.9052 - val_accuracy: 1.0000 - val_loss: 0.4501 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9595 - loss: 1.2818 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 337s 4s/step - accuracy: 0.9595 - loss: 1.2821 - val_accuracy: 0.9524 - val_loss: 2.0817 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:08 3s/step - accuracy: 0.9688 - loss: 0.5263 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9688 - loss: 0.5263 - val_accuracy: 0.5000 - val_loss: 4.9764 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2059.5964851379395 segundos Uso de CPU durante el entrenamiento: 22.5% Aumento en uso de memoria: 0.366241455078125 GB Resultados para lr=0.0005, l2=0.05, batch_size=32: Tiempo: 2059.5964851379395 segundos, CPU: 22.5%, Memoria: 0.366241455078125 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_31"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_15 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_15 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_15 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4263 - loss: 14.7463 Epoch 1: val_accuracy improved from -inf to 0.87642, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 334s 8s/step - accuracy: 0.4316 - loss: 14.5857 - val_accuracy: 0.8764 - val_loss: 2.3411 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.8750 - loss: 2.2404 Epoch 2: val_accuracy did not improve from 0.87642 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.8750 - loss: 2.2404 - val_accuracy: 0.8529 - val_loss: 3.7646 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9086 - loss: 1.7937 Epoch 3: val_accuracy improved from 0.87642 to 0.91193, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 335s 8s/step - accuracy: 0.9087 - loss: 1.7930 - val_accuracy: 0.9119 - val_loss: 1.9631 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:55 6s/step - accuracy: 0.8594 - loss: 3.5218 Epoch 4: val_accuracy improved from 0.91193 to 0.97059, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 92ms/step - accuracy: 0.8594 - loss: 3.5218 - val_accuracy: 0.9706 - val_loss: 0.4579 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9438 - loss: 1.3817 Epoch 5: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.9436 - loss: 1.3842 - val_accuracy: 0.9361 - val_loss: 1.4174 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:08 6s/step - accuracy: 0.9531 - loss: 0.6830 Epoch 6: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 84ms/step - accuracy: 0.9531 - loss: 0.6830 - val_accuracy: 0.9412 - val_loss: 1.0407 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9595 - loss: 0.9734 Epoch 7: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.9594 - loss: 0.9744 - val_accuracy: 0.9474 - val_loss: 1.1859 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:07 6s/step - accuracy: 0.9531 - loss: 0.7902 Epoch 8: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.9531 - loss: 0.7902 - val_accuracy: 0.8824 - val_loss: 1.7676 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9644 - loss: 0.8537 Epoch 9: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 330s 8s/step - accuracy: 0.9644 - loss: 0.8559 - val_accuracy: 0.9318 - val_loss: 1.5113 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:07 6s/step - accuracy: 0.9375 - loss: 0.8138 Epoch 10: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 83ms/step - accuracy: 0.9375 - loss: 0.8138 - val_accuracy: 0.9706 - val_loss: 1.2075 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9701 - loss: 0.8206 Epoch 11: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9701 - loss: 0.8213 - val_accuracy: 0.9602 - val_loss: 1.1640 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.9531 - loss: 0.7317 Epoch 12: val_accuracy improved from 0.97059 to 1.00000, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 93ms/step - accuracy: 0.9531 - loss: 0.7317 - val_accuracy: 1.0000 - val_loss: 0.4081 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9721 - loss: 0.8402 Epoch 13: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 330s 8s/step - accuracy: 0.9722 - loss: 0.8397 - val_accuracy: 0.9631 - val_loss: 1.1382 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:53 6s/step - accuracy: 0.9688 - loss: 0.8255 Epoch 14: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 96ms/step - accuracy: 0.9688 - loss: 0.8255 - val_accuracy: 0.9412 - val_loss: 1.2925 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9729 - loss: 0.8347 Epoch 15: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9730 - loss: 0.8342 - val_accuracy: 0.9616 - val_loss: 1.2805 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:06 6s/step - accuracy: 0.9688 - loss: 1.5490 Epoch 16: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.9688 - loss: 1.5490 - val_accuracy: 0.9118 - val_loss: 1.5093 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9682 - loss: 0.9235 Epoch 17: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 330s 8s/step - accuracy: 0.9682 - loss: 0.9233 - val_accuracy: 0.9460 - val_loss: 1.5959 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.9844 - loss: 0.9789 Epoch 18: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.9844 - loss: 0.9789 - val_accuracy: 0.9706 - val_loss: 0.8188 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9721 - loss: 0.9339 Epoch 19: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 333s 8s/step - accuracy: 0.9721 - loss: 0.9333 - val_accuracy: 0.9531 - val_loss: 1.5359 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.9844 - loss: 0.6208 Epoch 20: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 87ms/step - accuracy: 0.9844 - loss: 0.6208 - val_accuracy: 0.9706 - val_loss: 0.5574 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9766 - loss: 0.7381 Epoch 21: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.9766 - loss: 0.7417 - val_accuracy: 0.9659 - val_loss: 1.1822 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.9844 - loss: 0.5217 Epoch 22: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 83ms/step - accuracy: 0.9844 - loss: 0.5217 - val_accuracy: 0.9412 - val_loss: 2.8742 Epoch 22: early stopping Restoring model weights from the end of the best epoch: 12. Tiempo transcurrido para el entrenamiento: 3775.056304216385 segundos Uso de CPU durante el entrenamiento: 36.599999999999994% Aumento en uso de memoria: 0.05986785888671875 GB Resultados para lr=0.0005, l2=0.05, batch_size=64: Tiempo: 3775.056304216385 segundos, CPU: 36.599999999999994%, Memoria: 0.05986785888671875 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_33"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_16 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_16 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_16 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5565 - loss: 12.2703 Epoch 1: val_accuracy improved from -inf to 0.88179, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 344s 2s/step - accuracy: 0.5576 - loss: 12.2386 - val_accuracy: 0.8818 - val_loss: 3.1303 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:07 2s/step - accuracy: 0.9375 - loss: 1.2366 Epoch 2: val_accuracy did not improve from 0.88179 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 1.2366 - val_accuracy: 0.5000 - val_loss: 4.4717 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.8919 - loss: 3.1137 Epoch 3: val_accuracy improved from 0.88179 to 0.89538, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 347s 2s/step - accuracy: 0.8920 - loss: 3.1126 - val_accuracy: 0.8954 - val_loss: 3.1921 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 0.8750 - loss: 2.9106 Epoch 4: val_accuracy improved from 0.89538 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.8750 - loss: 2.9106 - val_accuracy: 1.0000 - val_loss: 0.4032 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9166 - loss: 2.2375 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 343s 2s/step - accuracy: 0.9166 - loss: 2.2376 - val_accuracy: 0.9375 - val_loss: 2.1720 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:12 2s/step - accuracy: 0.8750 - loss: 2.8767 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.8750 - loss: 2.8767 - val_accuracy: 1.0000 - val_loss: 0.4677 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9447 - loss: 1.8778 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 341s 2s/step - accuracy: 0.9447 - loss: 1.8798 - val_accuracy: 0.9212 - val_loss: 3.1020 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:40 2s/step - accuracy: 0.9375 - loss: 7.1626 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 7.1626 - val_accuracy: 1.0000 - val_loss: 0.5284 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9475 - loss: 2.1300 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 341s 2s/step - accuracy: 0.9475 - loss: 2.1295 - val_accuracy: 0.9484 - val_loss: 2.8633 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:02 1s/step - accuracy: 0.9375 - loss: 1.6619 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 1.6619 - val_accuracy: 1.0000 - val_loss: 0.5913 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9633 - loss: 1.4654 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 341s 2s/step - accuracy: 0.9633 - loss: 1.4667 - val_accuracy: 0.9402 - val_loss: 2.9704 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:15 2s/step - accuracy: 1.0000 - loss: 0.6371 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.6371 - val_accuracy: 1.0000 - val_loss: 0.6374 Epoch 13/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9705 - loss: 1.5201 Epoch 13: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.9705 - loss: 1.5198 - val_accuracy: 0.9579 - val_loss: 2.0427 Epoch 14/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 1.0000 - loss: 0.6763 Epoch 14: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.6763 - val_accuracy: 1.0000 - val_loss: 0.6766 Epoch 14: early stopping Restoring model weights from the end of the best epoch: 4. Tiempo transcurrido para el entrenamiento: 2419.914089679718 segundos Uso de CPU durante el entrenamiento: 26.0% Aumento en uso de memoria: 0.3121757507324219 GB Resultados para lr=0.0005, l2=0.1, batch_size=16: Tiempo: 2419.914089679718 segundos, CPU: 26.0%, Memoria: 0.3121757507324219 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_35"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_17 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_17 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_17 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5139 - loss: 13.9244 Epoch 1: val_accuracy improved from -inf to 0.83152, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 338s 4s/step - accuracy: 0.5162 - loss: 13.8457 - val_accuracy: 0.8315 - val_loss: 3.1930 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:58 3s/step - accuracy: 0.9062 - loss: 1.2108 Epoch 2: val_accuracy improved from 0.83152 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 8ms/step - accuracy: 0.9062 - loss: 1.2108 - val_accuracy: 1.0000 - val_loss: 0.3445 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.8895 - loss: 2.4827 Epoch 3: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 339s 4s/step - accuracy: 0.8897 - loss: 2.4774 - val_accuracy: 0.9321 - val_loss: 1.6311 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:00 3s/step - accuracy: 1.0000 - loss: 0.3721 Epoch 4: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 1.0000 - loss: 0.3721 - val_accuracy: 1.0000 - val_loss: 0.3722 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9262 - loss: 1.6709 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 337s 4s/step - accuracy: 0.9262 - loss: 1.6703 - val_accuracy: 0.9185 - val_loss: 1.9659 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:18 3s/step - accuracy: 0.9688 - loss: 0.9096 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.9688 - loss: 0.9096 - val_accuracy: 1.0000 - val_loss: 0.3994 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9504 - loss: 1.3136 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 336s 4s/step - accuracy: 0.9503 - loss: 1.3151 - val_accuracy: 0.9429 - val_loss: 1.7057 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.9375 - loss: 1.2235 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.9375 - loss: 1.2235 - val_accuracy: 1.0000 - val_loss: 0.4265 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9640 - loss: 0.9810 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 339s 4s/step - accuracy: 0.9639 - loss: 0.9830 - val_accuracy: 0.9198 - val_loss: 2.3332 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.9688 - loss: 0.5066 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9688 - loss: 0.5066 - val_accuracy: 1.0000 - val_loss: 0.4471 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9597 - loss: 1.3801 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 350s 4s/step - accuracy: 0.9597 - loss: 1.3791 - val_accuracy: 0.9688 - val_loss: 1.4421 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.9688 - loss: 0.8128 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9688 - loss: 0.8128 - val_accuracy: 1.0000 - val_loss: 0.4746 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2060.525102376938 segundos Uso de CPU durante el entrenamiento: 24.19999999999999% Aumento en uso de memoria: -0.09153366088867188 GB Resultados para lr=0.0005, l2=0.1, batch_size=32: Tiempo: 2060.525102376938 segundos, CPU: 24.19999999999999%, Memoria: -0.09153366088867188 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_37"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_18 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_18 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_18 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4347 - loss: 15.1208 Epoch 1: val_accuracy improved from -inf to 0.88778, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.4399 - loss: 14.9572 - val_accuracy: 0.8878 - val_loss: 2.3847 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.8594 - loss: 2.2792 Epoch 2: val_accuracy did not improve from 0.88778 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 84ms/step - accuracy: 0.8594 - loss: 2.2792 - val_accuracy: 0.8824 - val_loss: 1.4323 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9011 - loss: 2.0955 Epoch 3: val_accuracy improved from 0.88778 to 0.91477, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 333s 8s/step - accuracy: 0.9014 - loss: 2.0898 - val_accuracy: 0.9148 - val_loss: 1.9255 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:26 7s/step - accuracy: 0.8906 - loss: 2.2663 Epoch 4: val_accuracy improved from 0.91477 to 0.94118, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 92ms/step - accuracy: 0.8906 - loss: 2.2663 - val_accuracy: 0.9412 - val_loss: 3.0590 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9304 - loss: 1.4533 Epoch 5: val_accuracy improved from 0.94118 to 0.95170, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9306 - loss: 1.4521 - val_accuracy: 0.9517 - val_loss: 1.0741 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:05 6s/step - accuracy: 0.9688 - loss: 0.7803 Epoch 6: val_accuracy improved from 0.95170 to 1.00000, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 92ms/step - accuracy: 0.9688 - loss: 0.7803 - val_accuracy: 1.0000 - val_loss: 0.3911 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9454 - loss: 1.1237 Epoch 7: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9455 - loss: 1.1230 - val_accuracy: 0.9460 - val_loss: 1.4682 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.8906 - loss: 2.0263 Epoch 8: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.8906 - loss: 2.0263 - val_accuracy: 0.9118 - val_loss: 1.9532 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9650 - loss: 0.8957 Epoch 9: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 330s 8s/step - accuracy: 0.9649 - loss: 0.8969 - val_accuracy: 0.9602 - val_loss: 1.0955 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.9844 - loss: 0.4297 Epoch 10: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 86ms/step - accuracy: 0.9844 - loss: 0.4297 - val_accuracy: 0.9706 - val_loss: 0.5687 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9703 - loss: 0.9074 Epoch 11: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 330s 8s/step - accuracy: 0.9702 - loss: 0.9082 - val_accuracy: 0.9588 - val_loss: 0.9954 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.9531 - loss: 1.5508 Epoch 12: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 96ms/step - accuracy: 0.9531 - loss: 1.5508 - val_accuracy: 0.9706 - val_loss: 1.9621 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9695 - loss: 0.8586 Epoch 13: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.9695 - loss: 0.8594 - val_accuracy: 0.9588 - val_loss: 1.0309 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:52 6s/step - accuracy: 1.0000 - loss: 0.4111 Epoch 14: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 89ms/step - accuracy: 1.0000 - loss: 0.4111 - val_accuracy: 0.9412 - val_loss: 2.0373 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9755 - loss: 0.8157 Epoch 15: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9755 - loss: 0.8165 - val_accuracy: 0.9588 - val_loss: 1.3062 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:10 6s/step - accuracy: 0.9688 - loss: 1.0216 Epoch 16: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.9688 - loss: 1.0216 - val_accuracy: 0.9412 - val_loss: 2.5329 Epoch 16: early stopping Restoring model weights from the end of the best epoch: 6. Tiempo transcurrido para el entrenamiento: 2745.582318544388 segundos Uso de CPU durante el entrenamiento: 27.099999999999994% Aumento en uso de memoria: -0.22264480590820312 GB Resultados para lr=0.0005, l2=0.1, batch_size=64: Tiempo: 2745.582318544388 segundos, CPU: 27.099999999999994%, Memoria: -0.22264480590820312 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_39"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_19 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_19 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_19 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5745 - loss: 16.4246 Epoch 1: val_accuracy improved from -inf to 0.84647, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 344s 2s/step - accuracy: 0.5755 - loss: 16.3879 - val_accuracy: 0.8465 - val_loss: 6.8197 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:05 2s/step - accuracy: 0.6250 - loss: 20.9407 Epoch 2: val_accuracy improved from 0.84647 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.6250 - loss: 20.9407 - val_accuracy: 1.0000 - val_loss: 0.5724 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9014 - loss: 5.7236 Epoch 3: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 346s 2s/step - accuracy: 0.9014 - loss: 5.7240 - val_accuracy: 0.8967 - val_loss: 6.0108 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:12 2s/step - accuracy: 1.0000 - loss: 0.8505 Epoch 4: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.8505 - val_accuracy: 1.0000 - val_loss: 0.8528 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9311 - loss: 3.8975 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 344s 2s/step - accuracy: 0.9311 - loss: 3.8984 - val_accuracy: 0.9348 - val_loss: 3.6885 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 1.0000 - loss: 1.0928 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 1.0928 - val_accuracy: 1.0000 - val_loss: 1.0945 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9386 - loss: 4.7044 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 341s 2s/step - accuracy: 0.9387 - loss: 4.7015 - val_accuracy: 0.9552 - val_loss: 4.0928 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 1.0000 - loss: 1.3158 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 1.3158 - val_accuracy: 1.0000 - val_loss: 1.3173 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9545 - loss: 3.6700 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 341s 2s/step - accuracy: 0.9545 - loss: 3.6697 - val_accuracy: 0.9552 - val_loss: 4.5430 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:09 2s/step - accuracy: 1.0000 - loss: 1.5116 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 1.5116 - val_accuracy: 1.0000 - val_loss: 1.5125 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9669 - loss: 3.2491 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 348s 2s/step - accuracy: 0.9669 - loss: 3.2527 - val_accuracy: 0.9538 - val_loss: 5.0933 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:20 2s/step - accuracy: 0.9375 - loss: 2.7796 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 2.7796 - val_accuracy: 1.0000 - val_loss: 1.6869 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2076.202100753784 segundos Uso de CPU durante el entrenamiento: 25.89999999999999% Aumento en uso de memoria: 0.1414337158203125 GB Resultados para lr=0.001, l2=0.01, batch_size=16: Tiempo: 2076.202100753784 segundos, CPU: 25.89999999999999%, Memoria: 0.1414337158203125 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_41"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_20 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_20 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_20 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5123 - loss: 19.5326 Epoch 1: val_accuracy improved from -inf to 0.88723, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 341s 4s/step - accuracy: 0.5147 - loss: 19.4293 - val_accuracy: 0.8872 - val_loss: 4.3339 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:58 3s/step - accuracy: 0.8750 - loss: 4.3965 Epoch 2: val_accuracy improved from 0.88723 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 8ms/step - accuracy: 0.8750 - loss: 4.3965 - val_accuracy: 1.0000 - val_loss: 0.4681 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9111 - loss: 3.7682 Epoch 3: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 336s 4s/step - accuracy: 0.9111 - loss: 3.7639 - val_accuracy: 0.9389 - val_loss: 2.3771 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.9375 - loss: 0.8779 Epoch 4: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.9375 - loss: 0.8779 - val_accuracy: 1.0000 - val_loss: 0.5829 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9512 - loss: 2.0115 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 337s 4s/step - accuracy: 0.9511 - loss: 2.0167 - val_accuracy: 0.9293 - val_loss: 3.4126 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:24 3s/step - accuracy: 0.9688 - loss: 1.5290 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 3ms/step - accuracy: 0.9688 - loss: 1.5290 - val_accuracy: 1.0000 - val_loss: 0.6739 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9360 - loss: 3.1303 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 338s 4s/step - accuracy: 0.9359 - loss: 3.1396 - val_accuracy: 0.9266 - val_loss: 3.4730 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:01 3s/step - accuracy: 0.9688 - loss: 3.0431 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.9688 - loss: 3.0431 - val_accuracy: 1.0000 - val_loss: 0.8117 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9465 - loss: 3.2690 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 336s 4s/step - accuracy: 0.9466 - loss: 3.2643 - val_accuracy: 0.9470 - val_loss: 3.2240 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:57 3s/step - accuracy: 0.9688 - loss: 3.8612 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9688 - loss: 3.8612 - val_accuracy: 1.0000 - val_loss: 0.9113 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9600 - loss: 2.7340 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.9600 - loss: 2.7346 - val_accuracy: 0.9538 - val_loss: 3.1917 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:01 3s/step - accuracy: 0.9375 - loss: 2.2297 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9375 - loss: 2.2297 - val_accuracy: 1.0000 - val_loss: 0.9982 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2053.183504343033 segundos Uso de CPU durante el entrenamiento: 15.399999999999991% Aumento en uso de memoria: -0.08095169067382812 GB Resultados para lr=0.001, l2=0.01, batch_size=32: Tiempo: 2053.183504343033 segundos, CPU: 15.399999999999991%, Memoria: -0.08095169067382812 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_43"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_21 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_21 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_21 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4979 - loss: 16.8666 Epoch 1: val_accuracy improved from -inf to 0.89773, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.5027 - loss: 16.6990 - val_accuracy: 0.8977 - val_loss: 3.1693 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:03 6s/step - accuracy: 0.9062 - loss: 1.4228 Epoch 2: val_accuracy did not improve from 0.89773 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.9062 - loss: 1.4228 - val_accuracy: 0.8529 - val_loss: 1.7117 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9099 - loss: 2.9109 Epoch 3: val_accuracy improved from 0.89773 to 0.93040, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 334s 8s/step - accuracy: 0.9101 - loss: 2.9086 - val_accuracy: 0.9304 - val_loss: 2.4162 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:54 6s/step - accuracy: 0.9375 - loss: 1.2634 Epoch 4: val_accuracy did not improve from 0.93040 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.9375 - loss: 1.2634 - val_accuracy: 0.9118 - val_loss: 4.5644 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9481 - loss: 1.9929 Epoch 5: val_accuracy improved from 0.93040 to 0.94318, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 332s 8s/step - accuracy: 0.9481 - loss: 1.9937 - val_accuracy: 0.9432 - val_loss: 1.7826 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:53 6s/step - accuracy: 0.9531 - loss: 0.9078 Epoch 6: val_accuracy improved from 0.94318 to 0.97059, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 99ms/step - accuracy: 0.9531 - loss: 0.9078 - val_accuracy: 0.9706 - val_loss: 0.8937 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9522 - loss: 1.7257 Epoch 7: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9521 - loss: 1.7305 - val_accuracy: 0.9347 - val_loss: 2.6764 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:05 6s/step - accuracy: 0.8906 - loss: 2.4168 Epoch 8: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.8906 - loss: 2.4168 - val_accuracy: 0.9412 - val_loss: 1.1705 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9581 - loss: 1.5863 Epoch 9: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9581 - loss: 1.5883 - val_accuracy: 0.9503 - val_loss: 2.3738 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.9688 - loss: 2.5244 Epoch 10: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 83ms/step - accuracy: 0.9688 - loss: 2.5244 - val_accuracy: 0.9706 - val_loss: 1.7108 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9535 - loss: 2.0779 Epoch 11: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 329s 8s/step - accuracy: 0.9536 - loss: 2.0807 - val_accuracy: 0.9460 - val_loss: 2.8556 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:53 6s/step - accuracy: 0.9688 - loss: 1.6969 Epoch 12: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.9688 - loss: 1.6969 - val_accuracy: 0.9412 - val_loss: 2.5582 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9678 - loss: 1.8018 Epoch 13: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.9678 - loss: 1.7993 - val_accuracy: 0.9588 - val_loss: 2.1104 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:08 6s/step - accuracy: 0.9688 - loss: 1.0288 Epoch 14: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 83ms/step - accuracy: 0.9688 - loss: 1.0288 - val_accuracy: 0.9706 - val_loss: 0.9417 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9752 - loss: 1.5477 Epoch 15: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 333s 8s/step - accuracy: 0.9751 - loss: 1.5507 - val_accuracy: 0.9688 - val_loss: 1.7642 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:54 6s/step - accuracy: 0.9688 - loss: 1.2074 Epoch 16: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 86ms/step - accuracy: 0.9688 - loss: 1.2074 - val_accuracy: 0.9706 - val_loss: 0.9391 Epoch 16: early stopping Restoring model weights from the end of the best epoch: 6. Tiempo transcurrido para el entrenamiento: 2743.8381311893463 segundos Uso de CPU durante el entrenamiento: 22.5% Aumento en uso de memoria: 0.08846664428710938 GB Resultados para lr=0.001, l2=0.01, batch_size=64: Tiempo: 2743.8381311893463 segundos, CPU: 22.5%, Memoria: 0.08846664428710938 GB Precisión de validación: 0.970588207244873 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_45"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_22 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_22 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_22 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5710 - loss: 17.9512 Epoch 1: val_accuracy improved from -inf to 0.90489, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 344s 2s/step - accuracy: 0.5720 - loss: 17.9129 - val_accuracy: 0.9049 - val_loss: 4.4843 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.9375 - loss: 6.4414 Epoch 2: val_accuracy improved from 0.90489 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.9375 - loss: 6.4414 - val_accuracy: 1.0000 - val_loss: 0.5899 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.8966 - loss: 4.5973 Epoch 3: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 340s 2s/step - accuracy: 0.8966 - loss: 4.5965 - val_accuracy: 0.9198 - val_loss: 4.8002 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:33 2s/step - accuracy: 1.0000 - loss: 0.8673 Epoch 4: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 0.8673 - val_accuracy: 0.5000 - val_loss: 4.4055 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9195 - loss: 5.4913 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 348s 2s/step - accuracy: 0.9195 - loss: 5.4941 - val_accuracy: 0.9348 - val_loss: 5.0406 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 0.9375 - loss: 5.2354 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 5.2354 - val_accuracy: 1.0000 - val_loss: 1.1377 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9428 - loss: 4.1708 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 343s 2s/step - accuracy: 0.9428 - loss: 4.1697 - val_accuracy: 0.9416 - val_loss: 5.2362 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 3:19 1s/step - accuracy: 1.0000 - loss: 1.3401 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 1.3401 - val_accuracy: 1.0000 - val_loss: 1.3416 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9442 - loss: 5.0295 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 352s 2s/step - accuracy: 0.9442 - loss: 5.0272 - val_accuracy: 0.9389 - val_loss: 5.0842 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:02 1s/step - accuracy: 0.9375 - loss: 2.7231 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 2.7231 - val_accuracy: 1.0000 - val_loss: 1.5718 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9594 - loss: 4.1690 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 343s 2s/step - accuracy: 0.9595 - loss: 4.1675 - val_accuracy: 0.9389 - val_loss: 4.8885 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:18 2s/step - accuracy: 0.9375 - loss: 3.0526 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 3.0526 - val_accuracy: 1.0000 - val_loss: 1.7289 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2081.577974796295 segundos Uso de CPU durante el entrenamiento: 23.900000000000006% Aumento en uso de memoria: -0.043514251708984375 GB Resultados para lr=0.001, l2=0.05, batch_size=16: Tiempo: 2081.577974796295 segundos, CPU: 23.900000000000006%, Memoria: -0.043514251708984375 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_47"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_23 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_23 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_23 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5571 - loss: 16.3684 Epoch 1: val_accuracy improved from -inf to 0.87364, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 350s 4s/step - accuracy: 0.5593 - loss: 16.2828 - val_accuracy: 0.8736 - val_loss: 3.9299 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:35 3s/step - accuracy: 0.9375 - loss: 4.3357 Epoch 2: val_accuracy improved from 0.87364 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 8ms/step - accuracy: 0.9375 - loss: 4.3357 - val_accuracy: 1.0000 - val_loss: 0.4603 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9114 - loss: 2.9168 Epoch 3: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.9115 - loss: 2.9171 - val_accuracy: 0.9280 - val_loss: 3.1590 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:09 3s/step - accuracy: 0.9375 - loss: 0.8917 Epoch 4: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9375 - loss: 0.8917 - val_accuracy: 1.0000 - val_loss: 0.5815 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9343 - loss: 2.5511 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 338s 4s/step - accuracy: 0.9343 - loss: 2.5537 - val_accuracy: 0.9253 - val_loss: 3.7120 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:18 3s/step - accuracy: 0.9375 - loss: 2.5516 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9375 - loss: 2.5516 - val_accuracy: 1.0000 - val_loss: 0.7004 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9431 - loss: 2.3326 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 340s 4s/step - accuracy: 0.9431 - loss: 2.3382 - val_accuracy: 0.9321 - val_loss: 4.0687 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:58 3s/step - accuracy: 0.9062 - loss: 3.6672 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9062 - loss: 3.6672 - val_accuracy: 1.0000 - val_loss: 0.8273 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9539 - loss: 2.7641 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 337s 4s/step - accuracy: 0.9539 - loss: 2.7622 - val_accuracy: 0.9484 - val_loss: 2.4021 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:17 3s/step - accuracy: 0.9688 - loss: 1.6910 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.9688 - loss: 1.6910 - val_accuracy: 1.0000 - val_loss: 0.9394 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9681 - loss: 2.0754 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.9680 - loss: 2.0796 - val_accuracy: 0.9538 - val_loss: 3.1890 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:59 3s/step - accuracy: 1.0000 - loss: 1.0324 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 1.0000 - loss: 1.0324 - val_accuracy: 1.0000 - val_loss: 1.0450 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2075.415571451187 segundos Uso de CPU durante el entrenamiento: 18.80000000000001% Aumento en uso de memoria: -0.100433349609375 GB Resultados para lr=0.001, l2=0.05, batch_size=32: Tiempo: 2075.415571451187 segundos, CPU: 18.80000000000001%, Memoria: -0.100433349609375 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_49"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_24 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_24 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_24 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5064 - loss: 16.5895 Epoch 1: val_accuracy improved from -inf to 0.88778, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 334s 8s/step - accuracy: 0.5113 - loss: 16.4095 - val_accuracy: 0.8878 - val_loss: 3.9406 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.8438 - loss: 9.4449 Epoch 2: val_accuracy improved from 0.88778 to 0.91176, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 97ms/step - accuracy: 0.8438 - loss: 9.4449 - val_accuracy: 0.9118 - val_loss: 4.7573 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9107 - loss: 2.7772 Epoch 3: val_accuracy improved from 0.91176 to 0.96165, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.9110 - loss: 2.7715 - val_accuracy: 0.9616 - val_loss: 1.8672 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.9688 - loss: 1.4099 Epoch 4: val_accuracy did not improve from 0.96165 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.9688 - loss: 1.4099 - val_accuracy: 0.9412 - val_loss: 2.3316 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9456 - loss: 1.8533 Epoch 5: val_accuracy did not improve from 0.96165 40/40 ━━━━━━━━━━━━━━━━━━━━ 334s 8s/step - accuracy: 0.9456 - loss: 1.8573 - val_accuracy: 0.9389 - val_loss: 1.9441 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:11 6s/step - accuracy: 0.9844 - loss: 0.7252 Epoch 6: val_accuracy did not improve from 0.96165 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.9844 - loss: 0.7252 - val_accuracy: 0.9118 - val_loss: 3.1906 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9532 - loss: 1.8113 Epoch 7: val_accuracy did not improve from 0.96165 40/40 ━━━━━━━━━━━━━━━━━━━━ 330s 8s/step - accuracy: 0.9532 - loss: 1.8120 - val_accuracy: 0.9162 - val_loss: 4.0766 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:53 6s/step - accuracy: 0.9062 - loss: 3.9573 Epoch 8: val_accuracy did not improve from 0.96165 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.9062 - loss: 3.9573 - val_accuracy: 0.9118 - val_loss: 4.7475 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9598 - loss: 1.6696 Epoch 9: val_accuracy did not improve from 0.96165 40/40 ━━━━━━━━━━━━━━━━━━━━ 330s 8s/step - accuracy: 0.9598 - loss: 1.6709 - val_accuracy: 0.9361 - val_loss: 2.7426 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:54 6s/step - accuracy: 0.9531 - loss: 2.1353 Epoch 10: val_accuracy did not improve from 0.96165 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 83ms/step - accuracy: 0.9531 - loss: 2.1353 - val_accuracy: 0.8824 - val_loss: 3.9179 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9581 - loss: 1.9871 Epoch 11: val_accuracy did not improve from 0.96165 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9581 - loss: 1.9825 - val_accuracy: 0.9545 - val_loss: 2.4406 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.9688 - loss: 1.6186 Epoch 12: val_accuracy improved from 0.96165 to 0.97059, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 104ms/step - accuracy: 0.9688 - loss: 1.6186 - val_accuracy: 0.9706 - val_loss: 0.7052 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9712 - loss: 1.7417 Epoch 13: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.9712 - loss: 1.7408 - val_accuracy: 0.9361 - val_loss: 3.1923 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.9844 - loss: 0.8070 Epoch 14: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.9844 - loss: 0.8070 - val_accuracy: 0.9412 - val_loss: 6.5906 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9569 - loss: 2.2819 Epoch 15: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9569 - loss: 2.2812 - val_accuracy: 0.9560 - val_loss: 2.8808 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.9688 - loss: 2.0971 Epoch 16: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 91ms/step - accuracy: 0.9688 - loss: 2.0971 - val_accuracy: 0.9118 - val_loss: 4.8347 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9567 - loss: 2.1789 Epoch 17: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9568 - loss: 2.1733 - val_accuracy: 0.9645 - val_loss: 2.3180 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.9688 - loss: 1.7391 Epoch 18: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 84ms/step - accuracy: 0.9688 - loss: 1.7391 - val_accuracy: 0.9412 - val_loss: 2.0214 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9741 - loss: 1.6501 Epoch 19: val_accuracy improved from 0.97059 to 0.97301, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 332s 8s/step - accuracy: 0.9741 - loss: 1.6510 - val_accuracy: 0.9730 - val_loss: 1.8010 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:55 6s/step - accuracy: 1.0000 - loss: 0.8907 Epoch 20: val_accuracy improved from 0.97301 to 1.00000, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 92ms/step - accuracy: 1.0000 - loss: 0.8907 - val_accuracy: 1.0000 - val_loss: 0.8898 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9810 - loss: 1.3003 Epoch 21: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 330s 8s/step - accuracy: 0.9810 - loss: 1.3031 - val_accuracy: 0.9616 - val_loss: 2.9414 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:53 6s/step - accuracy: 1.0000 - loss: 0.9141 Epoch 22: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 86ms/step - accuracy: 1.0000 - loss: 0.9141 - val_accuracy: 1.0000 - val_loss: 0.9146 Epoch 23/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9790 - loss: 1.6879 Epoch 23: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 333s 8s/step - accuracy: 0.9789 - loss: 1.6890 - val_accuracy: 0.9602 - val_loss: 2.6449 Epoch 24/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.9844 - loss: 1.1336 Epoch 24: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 84ms/step - accuracy: 0.9844 - loss: 1.1336 - val_accuracy: 0.9706 - val_loss: 3.6424 Epoch 25/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9828 - loss: 1.4070 Epoch 25: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.9827 - loss: 1.4093 - val_accuracy: 0.9616 - val_loss: 2.6383 Epoch 26/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.9375 - loss: 2.7656 Epoch 26: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 94ms/step - accuracy: 0.9375 - loss: 2.7656 - val_accuracy: 0.9412 - val_loss: 11.0853 Epoch 27/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9825 - loss: 1.5162 Epoch 27: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 333s 8s/step - accuracy: 0.9825 - loss: 1.5164 - val_accuracy: 0.9702 - val_loss: 2.1395 Epoch 28/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:07 6s/step - accuracy: 0.9844 - loss: 1.4639 Epoch 28: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.9844 - loss: 1.4639 - val_accuracy: 0.9706 - val_loss: 1.4039 Epoch 29/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9875 - loss: 1.3647 Epoch 29: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 330s 8s/step - accuracy: 0.9875 - loss: 1.3696 - val_accuracy: 0.9631 - val_loss: 3.2042 Epoch 30/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:55 6s/step - accuracy: 0.9531 - loss: 3.2836 Epoch 30: val_accuracy did not improve from 1.00000 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 83ms/step - accuracy: 0.9531 - loss: 3.2836 - val_accuracy: 0.9412 - val_loss: 3.3419 Epoch 30: early stopping Restoring model weights from the end of the best epoch: 20. Tiempo transcurrido para el entrenamiento: 5138.4053819179535 segundos Uso de CPU durante el entrenamiento: 25.599999999999994% Aumento en uso de memoria: 0.0103302001953125 GB Resultados para lr=0.001, l2=0.05, batch_size=64: Tiempo: 5138.4053819179535 segundos, CPU: 25.599999999999994%, Memoria: 0.0103302001953125 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_51"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_25 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_25 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_25 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5630 - loss: 20.5819 Epoch 1: val_accuracy improved from -inf to 0.89674, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 348s 2s/step - accuracy: 0.5641 - loss: 20.5299 - val_accuracy: 0.8967 - val_loss: 4.6448 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:12 2s/step - accuracy: 0.7500 - loss: 7.4270 Epoch 2: val_accuracy did not improve from 0.89674 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7500 - loss: 7.4270 - val_accuracy: 0.5000 - val_loss: 1.8399 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9080 - loss: 4.5251 Epoch 3: val_accuracy improved from 0.89674 to 0.91712, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 343s 2s/step - accuracy: 0.9080 - loss: 4.5274 - val_accuracy: 0.9171 - val_loss: 4.5629 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 0.9375 - loss: 6.8902 Epoch 4: val_accuracy improved from 0.91712 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 4ms/step - accuracy: 0.9375 - loss: 6.8902 - val_accuracy: 1.0000 - val_loss: 0.8191 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9271 - loss: 4.4857 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 341s 2s/step - accuracy: 0.9271 - loss: 4.4860 - val_accuracy: 0.9389 - val_loss: 4.0922 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 1.0000 - loss: 1.0671 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 1.0671 - val_accuracy: 1.0000 - val_loss: 1.0684 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9443 - loss: 4.5494 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 341s 2s/step - accuracy: 0.9442 - loss: 4.5488 - val_accuracy: 0.9361 - val_loss: 4.7699 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.8125 - loss: 11.4939 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.8125 - loss: 11.4939 - val_accuracy: 1.0000 - val_loss: 1.2913 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9426 - loss: 4.9243 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.9426 - loss: 4.9253 - val_accuracy: 0.9321 - val_loss: 5.6842 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:02 1s/step - accuracy: 1.0000 - loss: 1.5480 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 1.5480 - val_accuracy: 1.0000 - val_loss: 1.5494 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9523 - loss: 4.2186 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 340s 2s/step - accuracy: 0.9523 - loss: 4.2199 - val_accuracy: 0.9497 - val_loss: 5.8332 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:09 2s/step - accuracy: 1.0000 - loss: 1.7584 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 1.0000 - loss: 1.7584 - val_accuracy: 1.0000 - val_loss: 1.7594 Epoch 13/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.9650 - loss: 4.1459 Epoch 13: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 345s 2s/step - accuracy: 0.9650 - loss: 4.1464 - val_accuracy: 0.9334 - val_loss: 6.2547 Epoch 14/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:15 2s/step - accuracy: 0.9375 - loss: 10.2722 Epoch 14: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.9375 - loss: 10.2722 - val_accuracy: 1.0000 - val_loss: 1.9231 Epoch 14: early stopping Restoring model weights from the end of the best epoch: 4. Tiempo transcurrido para el entrenamiento: 2421.348974943161 segundos Uso de CPU durante el entrenamiento: 25.599999999999994% Aumento en uso de memoria: 0.30681610107421875 GB Resultados para lr=0.001, l2=0.1, batch_size=16: Tiempo: 2421.348974943161 segundos, CPU: 25.599999999999994%, Memoria: 0.30681610107421875 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_53"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_26 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_26 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_26 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5339 - loss: 19.7976 Epoch 1: val_accuracy improved from -inf to 0.89946, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.5362 - loss: 19.6943 - val_accuracy: 0.8995 - val_loss: 3.6957 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:23 3s/step - accuracy: 0.8750 - loss: 3.5391 Epoch 2: val_accuracy improved from 0.89946 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 9ms/step - accuracy: 0.8750 - loss: 3.5391 - val_accuracy: 1.0000 - val_loss: 0.4663 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9058 - loss: 3.7751 Epoch 3: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 339s 4s/step - accuracy: 0.9058 - loss: 3.7760 - val_accuracy: 0.9293 - val_loss: 3.4588 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:01 3s/step - accuracy: 0.9688 - loss: 1.5962 Epoch 4: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.9688 - loss: 1.5962 - val_accuracy: 1.0000 - val_loss: 0.5867 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9281 - loss: 3.3893 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 338s 4s/step - accuracy: 0.9280 - loss: 3.3929 - val_accuracy: 0.9266 - val_loss: 3.8165 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.9062 - loss: 4.0146 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.9062 - loss: 4.0146 - val_accuracy: 1.0000 - val_loss: 0.8362 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9403 - loss: 3.2047 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 336s 4s/step - accuracy: 0.9403 - loss: 3.2016 - val_accuracy: 0.9511 - val_loss: 2.7556 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:26 3s/step - accuracy: 0.9688 - loss: 1.1360 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.9688 - loss: 1.1360 - val_accuracy: 1.0000 - val_loss: 0.8326 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9575 - loss: 2.6649 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.9576 - loss: 2.6622 - val_accuracy: 0.9524 - val_loss: 2.6923 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:58 3s/step - accuracy: 0.8750 - loss: 3.9046 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.8750 - loss: 3.9046 - val_accuracy: 1.0000 - val_loss: 0.9143 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.9702 - loss: 1.9934 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 338s 4s/step - accuracy: 0.9702 - loss: 1.9962 - val_accuracy: 0.9538 - val_loss: 3.2474 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:22 3s/step - accuracy: 0.9375 - loss: 4.3195 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.9375 - loss: 4.3195 - val_accuracy: 1.0000 - val_loss: 0.9902 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2061.667219400406 segundos Uso de CPU durante el entrenamiento: 25.39999999999999% Aumento en uso de memoria: -0.2578392028808594 GB Resultados para lr=0.001, l2=0.1, batch_size=32: Tiempo: 2061.667219400406 segundos, CPU: 25.39999999999999%, Memoria: -0.2578392028808594 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_55"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_27 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten_27 (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_27 (Dense) │ (None, 18) │ 451,602 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,166,290 (57.85 MB)
Trainable params: 451,602 (1.72 MB)
Non-trainable params: 14,714,688 (56.13 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4760 - loss: 23.8683 Epoch 1: val_accuracy improved from -inf to 0.89915, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.4811 - loss: 23.6032 - val_accuracy: 0.8991 - val_loss: 2.8106 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.9062 - loss: 2.6127 Epoch 2: val_accuracy improved from 0.89915 to 0.91176, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 100ms/step - accuracy: 0.9062 - loss: 2.6127 - val_accuracy: 0.9118 - val_loss: 1.7004 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9219 - loss: 2.2101 Epoch 3: val_accuracy improved from 0.91176 to 0.92188, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 332s 8s/step - accuracy: 0.9219 - loss: 2.2157 - val_accuracy: 0.9219 - val_loss: 2.8222 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:02 6s/step - accuracy: 0.8906 - loss: 1.8345 Epoch 4: val_accuracy did not improve from 0.92188 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.8906 - loss: 1.8345 - val_accuracy: 0.8824 - val_loss: 2.7156 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9514 - loss: 2.1202 Epoch 5: val_accuracy improved from 0.92188 to 0.94460, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 331s 8s/step - accuracy: 0.9514 - loss: 2.1207 - val_accuracy: 0.9446 - val_loss: 1.9858 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.9375 - loss: 3.1698 Epoch 6: val_accuracy improved from 0.94460 to 0.97059, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 92ms/step - accuracy: 0.9375 - loss: 3.1698 - val_accuracy: 0.9706 - val_loss: 1.2765 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9494 - loss: 1.5823 Epoch 7: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 330s 8s/step - accuracy: 0.9493 - loss: 1.5861 - val_accuracy: 0.9432 - val_loss: 2.5169 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.9531 - loss: 2.0113 Epoch 8: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.9531 - loss: 2.0113 - val_accuracy: 0.9412 - val_loss: 0.7040 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.9531 - loss: 1.8427 Epoch 9: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 342s 8s/step - accuracy: 0.9532 - loss: 1.8446 - val_accuracy: 0.9261 - val_loss: 2.9411 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.9062 - loss: 2.0855 Epoch 10: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 84ms/step - accuracy: 0.9062 - loss: 2.0855 - val_accuracy: 0.9412 - val_loss: 1.7005 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9518 - loss: 2.4618 Epoch 11: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 332s 8s/step - accuracy: 0.9520 - loss: 2.4522 - val_accuracy: 0.9560 - val_loss: 2.4249 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:55 6s/step - accuracy: 0.9531 - loss: 1.4318 Epoch 12: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 94ms/step - accuracy: 0.9531 - loss: 1.4318 - val_accuracy: 0.9118 - val_loss: 3.0823 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9598 - loss: 1.7918 Epoch 13: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 332s 8s/step - accuracy: 0.9598 - loss: 1.7926 - val_accuracy: 0.9460 - val_loss: 2.2385 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.9844 - loss: 1.5679 Epoch 14: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.9844 - loss: 1.5679 - val_accuracy: 0.9412 - val_loss: 1.7983 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.9692 - loss: 1.7073 Epoch 15: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 335s 8s/step - accuracy: 0.9692 - loss: 1.7082 - val_accuracy: 0.9389 - val_loss: 2.9604 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 1.0000 - loss: 0.7609 Epoch 16: val_accuracy did not improve from 0.97059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 90ms/step - accuracy: 1.0000 - loss: 0.7609 - val_accuracy: 0.9706 - val_loss: 1.3182 Epoch 16: early stopping Restoring model weights from the end of the best epoch: 6. Tiempo transcurrido para el entrenamiento: 2751.2750413417816 segundos Uso de CPU durante el entrenamiento: 22.799999999999997% Aumento en uso de memoria: -0.3160667419433594 GB Resultados para lr=0.001, l2=0.1, batch_size=64: Tiempo: 2751.2750413417816 segundos, CPU: 22.799999999999997%, Memoria: -0.3160667419433594 GB Precisión de validación: 0.970588207244873 Mejores hiperparámetros encontrados: {'learning_rate': 0.0001, 'l2_regularization': 0.01, 'batch_size': 16, 'val_accuracy': 1.0, 'elapsed_time': 2095.2359404563904, 'cpu_usage': 69.80000000000001, 'memory_usage': 502353920}
Explicación de la implementación: Early Stopping:
Se añade el callback EarlyStopping con patience configurado para detener el entrenamiento si no hay mejora en la precisión de validación después de 10 épocas. restore_best_weights=True asegura que el modelo retorne a los pesos de la época con la mejor precisión de validación. ModelCheckpoint:
El callback ModelCheckpoint guarda el mejor modelo basado en la precisión de validación. Seguimiento de métricas:
Se calculan y se imprimen las métricas de tiempo, uso de CPU y memoria para cada combinación de hiperparámetros. Con esta configuración, el entrenamiento se detendrá automáticamente cuando el modelo deje de mejorar, y el mejor modelo encontrado durante la búsqueda de hiperparámetros será guardado.
graficas ultimo modelo¶
import matplotlib.pyplot as plt
def plotTraining(hist, typeData):
if typeData == "loss":
# Obteniendo los valores de la pérdida
yc = hist.history['loss']
epochs = len(yc) # Actualizamos el número de épocas con la longitud real de yc
xc = range(epochs)
plt.figure(figsize=(10, 5))
plt.plot(xc, yc, '-r', label='Pérdida del entrenamiento')
plt.title('Pérdida del entrenamiento por época')
plt.xlabel('Épocas', fontsize=14)
plt.ylabel('Pérdida', fontsize=14)
plt.legend()
plt.grid()
plt.show()
elif typeData == "accuracy":
# Obteniendo los valores de la precisión
yc = hist.history['accuracy']
epochs = len(yc) # Actualizamos el número de épocas con la longitud real de yc
xc = range(epochs)
plt.figure(figsize=(10, 5))
plt.plot(xc, yc, '-b', label='Precisión del entrenamiento')
plt.title('Precisión del entrenamiento por época')
plt.xlabel('Épocas', fontsize=14)
plt.ylabel('Precisión', fontsize=14)
plt.legend()
plt.grid()
plt.show()
elif typeData == "val_loss":
# Obteniendo los valores de la pérdida de validación
yc = hist.history['val_loss']
epochs = len(yc) # Actualizamos el número de épocas con la longitud real de yc
xc = range(epochs)
plt.figure(figsize=(10, 5))
plt.plot(xc, yc, '-r', label='Pérdida de validación')
plt.title('Pérdida de validación por época')
plt.xlabel('Épocas', fontsize=14)
plt.ylabel('Pérdida', fontsize=14)
plt.legend()
plt.grid()
plt.show()
elif typeData == "val_accuracy":
# Obteniendo los valores de la precisión de validación
yc = hist.history['val_accuracy']
epochs = len(yc) # Actualizamos el número de épocas con la longitud real de yc
xc = range(epochs)
plt.figure(figsize=(10, 5))
plt.plot(xc, yc, '-b', label='Precisión de validación')
plt.title('Precisión de validación por época')
plt.xlabel('Épocas', fontsize=14)
plt.ylabel('Precisión', fontsize=14)
plt.legend()
plt.grid()
plt.show()
else:
raise ValueError(f"Tipo de dato desconocido: {typeData}. Use 'loss', 'accuracy', 'val_loss' o 'val_accuracy'.")
# Uso de la función para graficar las diferentes métricas
plotTraining(model_history, "loss")
plotTraining(model_history, "accuracy")
plotTraining(model_history, "val_loss")
plotTraining(model_history, "val_accuracy")
import matplotlib.pyplot as plt
def plotTraining(hist):
epochs = range(len(hist.history['loss']))
plt.figure(figsize=(12, 10))
# Gráfico de la pérdida
plt.subplot(2, 1, 1)
plt.plot(epochs, hist.history['loss'], '-r', label='Pérdida del entrenamiento')
plt.plot(epochs, hist.history['val_loss'], '-b', label='Pérdida de validación')
plt.title('Pérdida del entrenamiento y validación')
plt.xlabel('Épocas', fontsize=14)
plt.ylabel('Pérdida', fontsize=14)
plt.legend()
plt.grid()
# Gráfico de la precisión
plt.subplot(2, 1, 2)
plt.plot(epochs, hist.history['accuracy'], '-r', label='Precisión del entrenamiento')
plt.plot(epochs, hist.history['val_accuracy'], '-b', label='Precisión de validación')
plt.title('Precisión del entrenamiento y validación')
plt.xlabel('Épocas', fontsize=14)
plt.ylabel('Precisión', fontsize=14)
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
# Uso de la función para graficar
plotTraining(model_history)
from sklearn.metrics import confusion_matrix, f1_score, roc_curve, precision_score, recall_score, accuracy_score, roc_auc_score
from sklearn import metrics
from mlxtend.plotting import plot_confusion_matrix
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
names = [ 'CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS', 'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS', 'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA', 'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
test_data_dir = 'datasetpreprocesado/test'
test_datagen = ImageDataGenerator()
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(width_shape, height_shape),
batch_size = batch_size,
class_mode='categorical',
shuffle=False)
custom_Model50= load_model('best_model.keras')
predictions = custom_Model50.predict(test_generator)
y_pred = np.argmax(predictions, axis=1)
y_real = test_generator.classes
matc=confusion_matrix(y_real, y_pred)
plot_confusion_matrix(conf_mat=matc, figsize=(9,9), class_names = names, show_normed=False)
plt.tight_layout()
print(metrics.classification_report(y_real,y_pred, digits = 4))
Found 361 images belonging to 18 classes.
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\keras\src\trainers\data_adapters\py_dataset_adapter.py:120: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored. self._warn_if_super_not_called()
6/6 ━━━━━━━━━━━━━━━━━━━━ 43s 6s/step precision recall f1-score support 0 1.0000 0.5500 0.7097 20 1 0.7778 0.3500 0.4828 20 2 0.8696 1.0000 0.9302 20 3 0.7083 0.8500 0.7727 20 4 0.5714 0.8000 0.6667 20 5 0.3830 0.9000 0.5373 20 6 0.6897 1.0000 0.8163 20 7 0.9375 0.7500 0.8333 20 8 0.9500 0.9500 0.9500 20 9 0.6250 1.0000 0.7692 20 10 0.7500 0.9000 0.8182 20 11 0.8571 0.6000 0.7059 20 12 0.8000 0.4000 0.5333 20 13 0.8000 0.2000 0.3200 20 14 0.9000 0.9000 0.9000 20 15 1.0000 0.5500 0.7097 20 16 0.8462 0.5500 0.6667 20 17 0.4800 0.5714 0.5217 21 accuracy 0.7119 361 macro avg 0.7748 0.7123 0.7024 361 weighted avg 0.7739 0.7119 0.7019 361
Prueba 7¶
import time
import psutil
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Flatten
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Definir el número de muestras de entrenamiento y validación
nb_train_samples = 2621
nb_validation_samples = 738
# Definir el número de épocas
epochs = 50
# Definir el tamaño de las imágenes
width_shape = 224
height_shape = 224
# Definir el número de clases
num_classes = 18 # Ajustar según el número de clases en tu dataset
# Directorios de datos de entrenamiento y validación
train_data_dir = 'datasetpreprocesado/train'
validation_data_dir = 'datasetpreprocesado/valid'
# Función para crear y entrenar el modelo
def create_and_train_vgg16_model(learning_rate, l2_regularization, batch_size):
# Crear generadores de datos con el batch_size proporcionado
train_datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
preprocessing_function=preprocess_input
)
valid_datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
preprocessing_function=preprocess_input
)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(width_shape, height_shape),
batch_size=batch_size,
class_mode='categorical'
)
validation_generator = valid_datagen.flow_from_directory(
validation_data_dir,
target_size=(width_shape, height_shape),
batch_size=batch_size,
class_mode='categorical'
)
# Definir la entrada de la red neuronal con el tamaño de las imágenes
image_input = Input(shape=(width_shape, height_shape, 3))
# Cargar el modelo VGG16 preentrenado con pesos ajustados desde ImageNet
base_model = VGG16(input_tensor=image_input, include_top=True, weights='imagenet')
# Aplanar la salida del VGG16
x = Flatten(name='custom_flatten')(base_model.output)
# Añadir una nueva capa densa al final del modelo para la clasificación multiclase con regularización L2
out = Dense(num_classes, activation='softmax', kernel_regularizer='l2', name='custom_dense')(x)
# Crear un nuevo modelo personalizado que toma la entrada de la imagen y produce la salida clasificada
custom_vgg_model = Model(inputs=base_model.input, outputs=out)
# Congelar todas las capas del modelo base VGG16
for layer in base_model.layers:
layer.trainable = False
# Compilar el modelo con una función de pérdida, optimizador y métricas especificadas
custom_vgg_model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=learning_rate), metrics=['accuracy'])
# Mostrar un resumen del modelo que incluye la arquitectura y el número de parámetros
custom_vgg_model.summary()
# Medir el tiempo y el uso de CPU/memoria antes de entrenar
start_time = time.time()
start_cpu = psutil.cpu_percent(interval=None)
start_memory = psutil.virtual_memory().used
# Crear los callbacks para Early Stopping y guardar el mejor modelo
checkpoint = ModelCheckpoint('best_model.keras', monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
early_stopping = EarlyStopping(monitor='val_accuracy', patience=10, verbose=1, restore_best_weights=True)
# Entrenar el modelo utilizando generadores de datos para el conjunto de entrenamiento y validación
model_history = custom_vgg_model.fit(
train_generator,
epochs=epochs,
validation_data=validation_generator,
steps_per_epoch=nb_train_samples // batch_size,
validation_steps=nb_validation_samples // batch_size,
callbacks=[checkpoint, early_stopping]
)
# Medir el tiempo y el uso de CPU/memoria después de entrenar
end_time = time.time()
end_cpu = psutil.cpu_percent(interval=None)
end_memory = psutil.virtual_memory().used
# Calcular métricas de tiempo y uso de recursos
elapsed_time = end_time - start_time
cpu_usage = end_cpu - start_cpu
memory_usage = end_memory - start_memory
print(f"Tiempo transcurrido para el entrenamiento: {elapsed_time} segundos")
print(f"Uso de CPU durante el entrenamiento: {cpu_usage}%")
print(f"Aumento en uso de memoria: {memory_usage / (1024 ** 3)} GB")
return model_history, elapsed_time, cpu_usage, memory_usage
# Definir rangos de búsqueda para hiperparámetros
learning_rates = [0.0001, 0.0005, 0.001]
l2_regularizations = [0.01, 0.05, 0.1]
batch_sizes = [16, 32, 64]
# Variables para almacenar los mejores hiperparámetros y su rendimiento
best_val_accuracy = 0
best_hyperparams = {}
# Realizar la búsqueda de cuadrícula
for learning_rate in learning_rates:
for l2_regularization in l2_regularizations:
for batch_size in batch_sizes:
# Crear y entrenar el modelo con los hiperparámetros actuales
model_history, elapsed_time, cpu_usage, memory_usage = create_and_train_vgg16_model(learning_rate, l2_regularization, batch_size)
# Obtener la mejor precisión de validación de esta combinación de hiperparámetros
val_accuracy = np.max(model_history.history['val_accuracy'])
# Imprimir los resultados
print(f"Resultados para lr={learning_rate}, l2={l2_regularization}, batch_size={batch_size}:")
print(f"Tiempo: {elapsed_time} segundos, CPU: {cpu_usage}%, Memoria: {memory_usage / (1024 ** 3)} GB")
print(f"Precisión de validación: {val_accuracy}")
# Actualizar los mejores hiperparámetros si la precisión de validación mejora
if val_accuracy > best_val_accuracy:
best_val_accuracy = val_accuracy
best_hyperparams = {
'learning_rate': learning_rate,
'l2_regularization': l2_regularization,
'batch_size': batch_size,
'val_accuracy': val_accuracy,
'elapsed_time': elapsed_time,
'cpu_usage': cpu_usage,
'memory_usage': memory_usage
}
# Imprimir los mejores hiperparámetros y su rendimiento
print("Mejores hiperparámetros encontrados:")
print(best_hyperparams)
Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_4"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_2 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.0766 - loss: 3.1955 Epoch 1: val_accuracy improved from -inf to 0.13451, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 365s 2s/step - accuracy: 0.0768 - loss: 3.1952 - val_accuracy: 0.1345 - val_loss: 3.0679 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:24 2s/step - accuracy: 0.0625 - loss: 3.0668
C:\Users\Oscar Diaz\anaconda3\Lib\contextlib.py:158: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset. self.gen.throw(typ, value, traceback)
Epoch 2: val_accuracy did not improve from 0.13451 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.0625 - loss: 3.0668 - val_accuracy: 0.0000e+00 - val_loss: 3.0574 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.1637 - loss: 3.0388 Epoch 3: val_accuracy improved from 0.13451 to 0.21467, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 361s 2s/step - accuracy: 0.1638 - loss: 3.0387 - val_accuracy: 0.2147 - val_loss: 2.9675 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:15 2s/step - accuracy: 0.3125 - loss: 2.9683 Epoch 4: val_accuracy did not improve from 0.21467 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3125 - loss: 2.9683 - val_accuracy: 0.0000e+00 - val_loss: 2.9687 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.2599 - loss: 2.9509 Epoch 5: val_accuracy improved from 0.21467 to 0.34239, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.2601 - loss: 2.9508 - val_accuracy: 0.3424 - val_loss: 2.9119 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:25 2s/step - accuracy: 0.5000 - loss: 2.9020 Epoch 6: val_accuracy did not improve from 0.34239 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5000 - loss: 2.9020 - val_accuracy: 0.0000e+00 - val_loss: 2.9371 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.3587 - loss: 2.9014 Epoch 7: val_accuracy improved from 0.34239 to 0.40625, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 353s 2s/step - accuracy: 0.3588 - loss: 2.9014 - val_accuracy: 0.4062 - val_loss: 2.8794 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:15 2s/step - accuracy: 0.6250 - loss: 2.8735 Epoch 8: val_accuracy improved from 0.40625 to 0.50000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 4s 16ms/step - accuracy: 0.6250 - loss: 2.8735 - val_accuracy: 0.5000 - val_loss: 2.8610 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4378 - loss: 2.8739 Epoch 9: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.4378 - loss: 2.8739 - val_accuracy: 0.4660 - val_loss: 2.8605 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.3750 - loss: 2.8624 Epoch 10: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3750 - loss: 2.8624 - val_accuracy: 0.5000 - val_loss: 2.8194 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4483 - loss: 2.8572 Epoch 11: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.4483 - loss: 2.8572 - val_accuracy: 0.4552 - val_loss: 2.8495 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:15 2s/step - accuracy: 0.3125 - loss: 2.8489 Epoch 12: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3125 - loss: 2.8489 - val_accuracy: 0.0000e+00 - val_loss: 2.8982 Epoch 13/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4727 - loss: 2.8454 Epoch 13: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 357s 2s/step - accuracy: 0.4727 - loss: 2.8454 - val_accuracy: 0.4783 - val_loss: 2.8412 Epoch 14/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.5000 - loss: 2.8418 Epoch 14: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5000 - loss: 2.8418 - val_accuracy: 0.0000e+00 - val_loss: 2.8537 Epoch 15/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4959 - loss: 2.8371 Epoch 15: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.4960 - loss: 2.8371 - val_accuracy: 0.4891 - val_loss: 2.8344 Epoch 16/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.6875 - loss: 2.8479 Epoch 16: val_accuracy improved from 0.50000 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 4s 14ms/step - accuracy: 0.6875 - loss: 2.8479 - val_accuracy: 1.0000 - val_loss: 2.7901 Epoch 17/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5275 - loss: 2.8315 Epoch 17: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 348s 2s/step - accuracy: 0.5275 - loss: 2.8315 - val_accuracy: 0.5204 - val_loss: 2.8290 Epoch 18/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.5625 - loss: 2.8137 Epoch 18: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5625 - loss: 2.8137 - val_accuracy: 0.0000e+00 - val_loss: 2.8259 Epoch 19/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5425 - loss: 2.8263 Epoch 19: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 356s 2s/step - accuracy: 0.5425 - loss: 2.8263 - val_accuracy: 0.5476 - val_loss: 2.8233 Epoch 20/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:18 2s/step - accuracy: 0.6250 - loss: 2.7994 Epoch 20: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6250 - loss: 2.7994 - val_accuracy: 0.5000 - val_loss: 2.7774 Epoch 21/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5683 - loss: 2.8195 Epoch 21: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.5683 - loss: 2.8195 - val_accuracy: 0.5516 - val_loss: 2.8182 Epoch 22/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.5625 - loss: 2.8063 Epoch 22: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5625 - loss: 2.8063 - val_accuracy: 1.0000 - val_loss: 2.7622 Epoch 23/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5696 - loss: 2.8150 Epoch 23: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.5695 - loss: 2.8150 - val_accuracy: 0.5516 - val_loss: 2.8129 Epoch 24/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:30 2s/step - accuracy: 0.6875 - loss: 2.7975 Epoch 24: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6875 - loss: 2.7975 - val_accuracy: 0.5000 - val_loss: 2.8173 Epoch 25/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5609 - loss: 2.8110 Epoch 25: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 360s 2s/step - accuracy: 0.5609 - loss: 2.8110 - val_accuracy: 0.5679 - val_loss: 2.8098 Epoch 26/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:15 2s/step - accuracy: 0.5000 - loss: 2.8230 Epoch 26: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5000 - loss: 2.8230 - val_accuracy: 0.0000e+00 - val_loss: 2.8437 Epoch 26: early stopping Restoring model weights from the end of the best epoch: 16. Tiempo transcurrido para el entrenamiento: 4630.410501003265 segundos Uso de CPU durante el entrenamiento: 46.2% Aumento en uso de memoria: -0.24663543701171875 GB Resultados para lr=0.0001, l2=0.01, batch_size=16: Tiempo: 4630.410501003265 segundos, CPU: 46.2%, Memoria: -0.24663543701171875 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_6"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_3 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.0962 - loss: 3.2078 Epoch 1: val_accuracy improved from -inf to 0.10598, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 346s 4s/step - accuracy: 0.0961 - loss: 3.2075 - val_accuracy: 0.1060 - val_loss: 3.1354 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:01 3s/step - accuracy: 0.1875 - loss: 3.1282 Epoch 2: val_accuracy did not improve from 0.10598 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.1875 - loss: 3.1282 - val_accuracy: 0.0000e+00 - val_loss: 3.1432 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.1299 - loss: 3.1149 Epoch 3: val_accuracy improved from 0.10598 to 0.13587, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 347s 4s/step - accuracy: 0.1300 - loss: 3.1147 - val_accuracy: 0.1359 - val_loss: 3.0595 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:59 3s/step - accuracy: 0.1250 - loss: 3.0502 Epoch 4: val_accuracy did not improve from 0.13587 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.1250 - loss: 3.0502 - val_accuracy: 0.0000e+00 - val_loss: 3.0644 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.1841 - loss: 3.0432 Epoch 5: val_accuracy improved from 0.13587 to 0.21196, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.1843 - loss: 3.0430 - val_accuracy: 0.2120 - val_loss: 3.0020 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:09 3s/step - accuracy: 0.2812 - loss: 2.9933 Epoch 6: val_accuracy did not improve from 0.21196 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.2812 - loss: 2.9933 - val_accuracy: 0.0000e+00 - val_loss: 3.0296 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.2350 - loss: 2.9884 Epoch 7: val_accuracy improved from 0.21196 to 0.25000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.2351 - loss: 2.9882 - val_accuracy: 0.2500 - val_loss: 2.9585 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.4375 - loss: 2.9432 Epoch 8: val_accuracy did not improve from 0.25000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.4375 - loss: 2.9432 - val_accuracy: 0.0000e+00 - val_loss: 3.0041 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.2725 - loss: 2.9482 Epoch 9: val_accuracy did not improve from 0.25000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.2724 - loss: 2.9481 - val_accuracy: 0.2459 - val_loss: 2.9271 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.2500 - loss: 2.9240 Epoch 10: val_accuracy did not improve from 0.25000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.2500 - loss: 2.9240 - val_accuracy: 0.0000e+00 - val_loss: 2.9464 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.3018 - loss: 2.9175 Epoch 11: val_accuracy improved from 0.25000 to 0.30163, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 346s 4s/step - accuracy: 0.3017 - loss: 2.9175 - val_accuracy: 0.3016 - val_loss: 2.9026 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.4688 - loss: 2.9051 Epoch 12: val_accuracy improved from 0.30163 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.4688 - loss: 2.9051 - val_accuracy: 1.0000 - val_loss: 2.8801 Epoch 13/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.3486 - loss: 2.8968 Epoch 13: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 352s 4s/step - accuracy: 0.3487 - loss: 2.8967 - val_accuracy: 0.3614 - val_loss: 2.8845 Epoch 14/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.3750 - loss: 2.8825 Epoch 14: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.3750 - loss: 2.8825 - val_accuracy: 0.5000 - val_loss: 2.8587 Epoch 15/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.3892 - loss: 2.8784 Epoch 15: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.3892 - loss: 2.8784 - val_accuracy: 0.3668 - val_loss: 2.8707 Epoch 16/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.4375 - loss: 2.8664 Epoch 16: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 5ms/step - accuracy: 0.4375 - loss: 2.8664 - val_accuracy: 1.0000 - val_loss: 2.8524 Epoch 17/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.3922 - loss: 2.8671 Epoch 17: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.3924 - loss: 2.8670 - val_accuracy: 0.3967 - val_loss: 2.8595 Epoch 18/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:34 3s/step - accuracy: 0.3750 - loss: 2.8577 Epoch 18: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.3750 - loss: 2.8577 - val_accuracy: 0.0000e+00 - val_loss: 2.8647 Epoch 19/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.4448 - loss: 2.8548 Epoch 19: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.4446 - loss: 2.8548 - val_accuracy: 0.4103 - val_loss: 2.8509 Epoch 20/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:13 3s/step - accuracy: 0.4375 - loss: 2.8489 Epoch 20: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.4375 - loss: 2.8489 - val_accuracy: 0.5000 - val_loss: 2.8608 Epoch 21/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.4260 - loss: 2.8478 Epoch 21: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.4262 - loss: 2.8478 - val_accuracy: 0.4552 - val_loss: 2.8449 Epoch 22/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.3750 - loss: 2.8640 Epoch 22: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.3750 - loss: 2.8640 - val_accuracy: 0.5000 - val_loss: 2.8430 Epoch 22: early stopping Restoring model weights from the end of the best epoch: 12. Tiempo transcurrido para el entrenamiento: 3835.0894253253937 segundos Uso de CPU durante el entrenamiento: 55.2% Aumento en uso de memoria: -0.19287872314453125 GB Resultados para lr=0.0001, l2=0.01, batch_size=32: Tiempo: 3835.0894253253937 segundos, CPU: 55.2%, Memoria: -0.19287872314453125 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_8"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_4 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.0637 - loss: 3.2282 Epoch 1: val_accuracy improved from -inf to 0.09375, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 356s 9s/step - accuracy: 0.0640 - loss: 3.2278 - val_accuracy: 0.0938 - val_loss: 3.1871 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.0469 - loss: 3.1898 Epoch 2: val_accuracy did not improve from 0.09375 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.0469 - loss: 3.1898 - val_accuracy: 0.0294 - val_loss: 3.1986 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.0889 - loss: 3.1741 Epoch 3: val_accuracy improved from 0.09375 to 0.13778, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.0893 - loss: 3.1738 - val_accuracy: 0.1378 - val_loss: 3.1378 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:32 7s/step - accuracy: 0.1562 - loss: 3.1376 Epoch 4: val_accuracy improved from 0.13778 to 0.23529, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 156ms/step - accuracy: 0.1562 - loss: 3.1376 - val_accuracy: 0.2353 - val_loss: 3.1327 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.1497 - loss: 3.1260 Epoch 5: val_accuracy did not improve from 0.23529 40/40 ━━━━━━━━━━━━━━━━━━━━ 343s 8s/step - accuracy: 0.1498 - loss: 3.1257 - val_accuracy: 0.1719 - val_loss: 3.0958 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.2812 - loss: 3.0909 Epoch 6: val_accuracy did not improve from 0.23529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.2812 - loss: 3.0909 - val_accuracy: 0.1176 - val_loss: 3.0941 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.1644 - loss: 3.0859 Epoch 7: val_accuracy did not improve from 0.23529 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.1649 - loss: 3.0856 - val_accuracy: 0.1932 - val_loss: 3.0592 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:15 7s/step - accuracy: 0.2344 - loss: 3.0572 Epoch 8: val_accuracy did not improve from 0.23529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.2344 - loss: 3.0572 - val_accuracy: 0.1765 - val_loss: 3.0660 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.2263 - loss: 3.0496 Epoch 9: val_accuracy did not improve from 0.23529 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.2263 - loss: 3.0494 - val_accuracy: 0.2287 - val_loss: 3.0282 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:03 6s/step - accuracy: 0.1406 - loss: 3.0338 Epoch 10: val_accuracy improved from 0.23529 to 0.26471, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 141ms/step - accuracy: 0.1406 - loss: 3.0338 - val_accuracy: 0.2647 - val_loss: 3.0259 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.2330 - loss: 3.0201 Epoch 11: val_accuracy did not improve from 0.26471 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.2332 - loss: 3.0199 - val_accuracy: 0.2585 - val_loss: 2.9999 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:15 7s/step - accuracy: 0.2656 - loss: 2.9968 Epoch 12: val_accuracy improved from 0.26471 to 0.29412, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 161ms/step - accuracy: 0.2656 - loss: 2.9968 - val_accuracy: 0.2941 - val_loss: 2.9980 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.2734 - loss: 2.9926 Epoch 13: val_accuracy did not improve from 0.29412 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.2735 - loss: 2.9924 - val_accuracy: 0.2926 - val_loss: 2.9758 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.2969 - loss: 2.9771 Epoch 14: val_accuracy did not improve from 0.29412 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.2969 - loss: 2.9771 - val_accuracy: 0.2059 - val_loss: 2.9774 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.2819 - loss: 2.9708 Epoch 15: val_accuracy did not improve from 0.29412 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.2820 - loss: 2.9707 - val_accuracy: 0.2827 - val_loss: 2.9567 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:17 7s/step - accuracy: 0.2969 - loss: 2.9522 Epoch 16: val_accuracy improved from 0.29412 to 0.38235, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 153ms/step - accuracy: 0.2969 - loss: 2.9522 - val_accuracy: 0.3824 - val_loss: 2.9540 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.3017 - loss: 2.9496 Epoch 17: val_accuracy did not improve from 0.38235 40/40 ━━━━━━━━━━━━━━━━━━━━ 344s 8s/step - accuracy: 0.3018 - loss: 2.9495 - val_accuracy: 0.3068 - val_loss: 2.9375 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.2500 - loss: 2.9362 Epoch 18: val_accuracy did not improve from 0.38235 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.2500 - loss: 2.9362 - val_accuracy: 0.2059 - val_loss: 2.9456 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.3082 - loss: 2.9340 Epoch 19: val_accuracy did not improve from 0.38235 40/40 ━━━━━━━━━━━━━━━━━━━━ 341s 8s/step - accuracy: 0.3085 - loss: 2.9339 - val_accuracy: 0.3509 - val_loss: 2.9224 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.3594 - loss: 2.9156 Epoch 20: val_accuracy did not improve from 0.38235 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.3594 - loss: 2.9156 - val_accuracy: 0.2941 - val_loss: 2.9260 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.3634 - loss: 2.9164 Epoch 21: val_accuracy did not improve from 0.38235 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.3635 - loss: 2.9163 - val_accuracy: 0.3509 - val_loss: 2.9096 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:06 6s/step - accuracy: 0.3281 - loss: 2.9138 Epoch 22: val_accuracy did not improve from 0.38235 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 94ms/step - accuracy: 0.3281 - loss: 2.9138 - val_accuracy: 0.3529 - val_loss: 2.9083 Epoch 23/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.3835 - loss: 2.9046 Epoch 23: val_accuracy did not improve from 0.38235 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.3835 - loss: 2.9046 - val_accuracy: 0.3693 - val_loss: 2.8968 Epoch 24/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:54 6s/step - accuracy: 0.2656 - loss: 2.9035 Epoch 24: val_accuracy improved from 0.38235 to 0.41176, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 143ms/step - accuracy: 0.2656 - loss: 2.9035 - val_accuracy: 0.4118 - val_loss: 2.8977 Epoch 25/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.3804 - loss: 2.8928 Epoch 25: val_accuracy did not improve from 0.41176 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.3805 - loss: 2.8928 - val_accuracy: 0.4006 - val_loss: 2.8877 Epoch 26/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.4688 - loss: 2.8915 Epoch 26: val_accuracy did not improve from 0.41176 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.4688 - loss: 2.8915 - val_accuracy: 0.3235 - val_loss: 2.8900 Epoch 27/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4233 - loss: 2.8832 Epoch 27: val_accuracy improved from 0.41176 to 0.41903, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.4235 - loss: 2.8831 - val_accuracy: 0.4190 - val_loss: 2.8796 Epoch 28/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:06 6s/step - accuracy: 0.4375 - loss: 2.8755 Epoch 28: val_accuracy did not improve from 0.41903 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.4375 - loss: 2.8755 - val_accuracy: 0.4118 - val_loss: 2.8802 Epoch 29/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4318 - loss: 2.8751 Epoch 29: val_accuracy improved from 0.41903 to 0.48864, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 347s 9s/step - accuracy: 0.4325 - loss: 2.8750 - val_accuracy: 0.4886 - val_loss: 2.8711 Epoch 30/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:48 7s/step - accuracy: 0.4219 - loss: 2.8805 Epoch 30: val_accuracy did not improve from 0.48864 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 93ms/step - accuracy: 0.4219 - loss: 2.8805 - val_accuracy: 0.4118 - val_loss: 2.8805 Epoch 31/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4551 - loss: 2.8678 Epoch 31: val_accuracy did not improve from 0.48864 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.4553 - loss: 2.8678 - val_accuracy: 0.4645 - val_loss: 2.8628 Epoch 32/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:06 6s/step - accuracy: 0.5156 - loss: 2.8593 Epoch 32: val_accuracy did not improve from 0.48864 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.5156 - loss: 2.8593 - val_accuracy: 0.4118 - val_loss: 2.8724 Epoch 33/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4645 - loss: 2.8606 Epoch 33: val_accuracy did not improve from 0.48864 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.4649 - loss: 2.8606 - val_accuracy: 0.4716 - val_loss: 2.8583 Epoch 34/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:13 6s/step - accuracy: 0.5938 - loss: 2.8492 Epoch 34: val_accuracy improved from 0.48864 to 0.52941, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 140ms/step - accuracy: 0.5938 - loss: 2.8492 - val_accuracy: 0.5294 - val_loss: 2.8563 Epoch 35/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5063 - loss: 2.8539 Epoch 35: val_accuracy did not improve from 0.52941 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.5062 - loss: 2.8539 - val_accuracy: 0.4773 - val_loss: 2.8527 Epoch 36/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.4062 - loss: 2.8470 Epoch 36: val_accuracy improved from 0.52941 to 0.55882, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 159ms/step - accuracy: 0.4062 - loss: 2.8470 - val_accuracy: 0.5588 - val_loss: 2.8492 Epoch 37/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4778 - loss: 2.8506 Epoch 37: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.4783 - loss: 2.8506 - val_accuracy: 0.4815 - val_loss: 2.8484 Epoch 38/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.5469 - loss: 2.8418 Epoch 38: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 96ms/step - accuracy: 0.5469 - loss: 2.8418 - val_accuracy: 0.5294 - val_loss: 2.8515 Epoch 39/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5024 - loss: 2.8463 Epoch 39: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.5027 - loss: 2.8463 - val_accuracy: 0.5312 - val_loss: 2.8417 Epoch 40/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.4062 - loss: 2.8479 Epoch 40: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 84ms/step - accuracy: 0.4062 - loss: 2.8479 - val_accuracy: 0.5588 - val_loss: 2.8566 Epoch 41/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5246 - loss: 2.8414 Epoch 41: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.5244 - loss: 2.8413 - val_accuracy: 0.5014 - val_loss: 2.8398 Epoch 42/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:15 7s/step - accuracy: 0.5000 - loss: 2.8380 Epoch 42: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 99ms/step - accuracy: 0.5000 - loss: 2.8380 - val_accuracy: 0.5294 - val_loss: 2.8391 Epoch 43/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5340 - loss: 2.8378 Epoch 43: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 345s 9s/step - accuracy: 0.5337 - loss: 2.8378 - val_accuracy: 0.5028 - val_loss: 2.8368 Epoch 44/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.5781 - loss: 2.8277 Epoch 44: val_accuracy improved from 0.55882 to 0.58824, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 138ms/step - accuracy: 0.5781 - loss: 2.8277 - val_accuracy: 0.5882 - val_loss: 2.8429 Epoch 45/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5235 - loss: 2.8362 Epoch 45: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.5236 - loss: 2.8361 - val_accuracy: 0.5241 - val_loss: 2.8325 Epoch 46/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.5000 - loss: 2.8399 Epoch 46: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.5000 - loss: 2.8399 - val_accuracy: 0.5588 - val_loss: 2.8369 Epoch 47/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5276 - loss: 2.8321 Epoch 47: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.5277 - loss: 2.8321 - val_accuracy: 0.5142 - val_loss: 2.8300 Epoch 48/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.5000 - loss: 2.8242 Epoch 48: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 133ms/step - accuracy: 0.5000 - loss: 2.8242 - val_accuracy: 0.5294 - val_loss: 2.8356 Epoch 49/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5265 - loss: 2.8284 Epoch 49: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.5266 - loss: 2.8284 - val_accuracy: 0.5398 - val_loss: 2.8271 Epoch 50/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.5000 - loss: 2.8265 Epoch 50: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.5000 - loss: 2.8265 - val_accuracy: 0.5294 - val_loss: 2.8228 Restoring model weights from the end of the best epoch: 44. Tiempo transcurrido para el entrenamiento: 8753.506147384644 segundos Uso de CPU durante el entrenamiento: 43.8% Aumento en uso de memoria: 0.314056396484375 GB Resultados para lr=0.0001, l2=0.01, batch_size=64: Tiempo: 8753.506147384644 segundos, CPU: 43.8%, Memoria: 0.314056396484375 GB Precisión de validación: 0.5882353186607361 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_10"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_5 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.1113 - loss: 3.1941 Epoch 1: val_accuracy improved from -inf to 0.20516, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 357s 2s/step - accuracy: 0.1115 - loss: 3.1938 - val_accuracy: 0.2052 - val_loss: 3.0679 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 6:39 2s/step - accuracy: 0.2500 - loss: 3.0733 Epoch 2: val_accuracy did not improve from 0.20516 163/163 ━━━━━━━━━━━━━━━━━━━━ 3s 3ms/step - accuracy: 0.2500 - loss: 3.0733 - val_accuracy: 0.0000e+00 - val_loss: 3.0653 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.2256 - loss: 3.0380 Epoch 3: val_accuracy improved from 0.20516 to 0.28261, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 356s 2s/step - accuracy: 0.2257 - loss: 3.0378 - val_accuracy: 0.2826 - val_loss: 2.9675 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 0.1875 - loss: 2.9712 Epoch 4: val_accuracy did not improve from 0.28261 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.1875 - loss: 2.9712 - val_accuracy: 0.0000e+00 - val_loss: 2.9652 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.2843 - loss: 2.9506 Epoch 5: val_accuracy improved from 0.28261 to 0.33967, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 352s 2s/step - accuracy: 0.2843 - loss: 2.9505 - val_accuracy: 0.3397 - val_loss: 2.9104 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:05 2s/step - accuracy: 0.4375 - loss: 2.9084 Epoch 6: val_accuracy improved from 0.33967 to 0.50000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 4s 13ms/step - accuracy: 0.4375 - loss: 2.9084 - val_accuracy: 0.5000 - val_loss: 2.8842 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.3414 - loss: 2.9013 Epoch 7: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 357s 2s/step - accuracy: 0.3415 - loss: 2.9012 - val_accuracy: 0.4117 - val_loss: 2.8783 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:23 2s/step - accuracy: 0.5000 - loss: 2.8763 Epoch 8: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5000 - loss: 2.8763 - val_accuracy: 0.0000e+00 - val_loss: 2.9139 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.3987 - loss: 2.8723 Epoch 9: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.3988 - loss: 2.8723 - val_accuracy: 0.4266 - val_loss: 2.8597 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.2500 - loss: 2.8586 Epoch 10: val_accuracy improved from 0.50000 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 4s 14ms/step - accuracy: 0.2500 - loss: 2.8586 - val_accuracy: 1.0000 - val_loss: 2.8077 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4308 - loss: 2.8545 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.4308 - loss: 2.8545 - val_accuracy: 0.4429 - val_loss: 2.8485 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:20 2s/step - accuracy: 0.4375 - loss: 2.8701 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.4375 - loss: 2.8701 - val_accuracy: 1.0000 - val_loss: 2.8724 Epoch 13/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4579 - loss: 2.8433 Epoch 13: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 353s 2s/step - accuracy: 0.4579 - loss: 2.8433 - val_accuracy: 0.4484 - val_loss: 2.8391 Epoch 14/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.2500 - loss: 2.8536 Epoch 14: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.2500 - loss: 2.8536 - val_accuracy: 0.5000 - val_loss: 2.8715 Epoch 15/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4573 - loss: 2.8368 Epoch 15: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.4573 - loss: 2.8368 - val_accuracy: 0.4769 - val_loss: 2.8322 Epoch 16/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 0.5000 - loss: 2.8358 Epoch 16: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5000 - loss: 2.8358 - val_accuracy: 0.5000 - val_loss: 2.8970 Epoch 17/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4739 - loss: 2.8314 Epoch 17: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.4739 - loss: 2.8314 - val_accuracy: 0.4837 - val_loss: 2.8269 Epoch 18/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.3750 - loss: 2.8284 Epoch 18: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3750 - loss: 2.8284 - val_accuracy: 0.5000 - val_loss: 2.8102 Epoch 19/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4811 - loss: 2.8263 Epoch 19: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 357s 2s/step - accuracy: 0.4810 - loss: 2.8263 - val_accuracy: 0.4769 - val_loss: 2.8212 Epoch 20/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:15 2s/step - accuracy: 0.3125 - loss: 2.8236 Epoch 20: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3125 - loss: 2.8236 - val_accuracy: 0.0000e+00 - val_loss: 2.7921 Epoch 20: early stopping Restoring model weights from the end of the best epoch: 10. Tiempo transcurrido para el entrenamiento: 3554.6360399723053 segundos Uso de CPU durante el entrenamiento: 56.0% Aumento en uso de memoria: -1.0743408203125 GB Resultados para lr=0.0001, l2=0.05, batch_size=16: Tiempo: 3554.6360399723053 segundos, CPU: 56.0%, Memoria: -1.0743408203125 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_12"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_6 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.0846 - loss: 3.2216 Epoch 1: val_accuracy improved from -inf to 0.12228, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 348s 4s/step - accuracy: 0.0847 - loss: 3.2213 - val_accuracy: 0.1223 - val_loss: 3.1471 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:04 3s/step - accuracy: 0.0625 - loss: 3.1451 Epoch 2: val_accuracy improved from 0.12228 to 0.50000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 5s 27ms/step - accuracy: 0.0625 - loss: 3.1451 - val_accuracy: 0.5000 - val_loss: 3.1167 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.1387 - loss: 3.1243 Epoch 3: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 347s 4s/step - accuracy: 0.1389 - loss: 3.1241 - val_accuracy: 0.1753 - val_loss: 3.0696 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:14 3s/step - accuracy: 0.0938 - loss: 3.0729 Epoch 4: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.0938 - loss: 3.0729 - val_accuracy: 0.0000e+00 - val_loss: 3.0729 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.1962 - loss: 3.0523 Epoch 5: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.1964 - loss: 3.0521 - val_accuracy: 0.2242 - val_loss: 3.0106 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:04 3s/step - accuracy: 0.1250 - loss: 3.0209 Epoch 6: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.1250 - loss: 3.0209 - val_accuracy: 0.5000 - val_loss: 2.9899 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.2342 - loss: 2.9975 Epoch 7: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.2344 - loss: 2.9974 - val_accuracy: 0.2527 - val_loss: 2.9667 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:08 3s/step - accuracy: 0.3438 - loss: 2.9624 Epoch 8: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.3438 - loss: 2.9624 - val_accuracy: 0.5000 - val_loss: 2.9529 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.2962 - loss: 2.9548 Epoch 9: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.2961 - loss: 2.9547 - val_accuracy: 0.2867 - val_loss: 2.9333 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:17 3s/step - accuracy: 0.3438 - loss: 2.9273 Epoch 10: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.3438 - loss: 2.9273 - val_accuracy: 0.0000e+00 - val_loss: 2.9402 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.3170 - loss: 2.9251 Epoch 11: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.3171 - loss: 2.9250 - val_accuracy: 0.3356 - val_loss: 2.9085 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.4688 - loss: 2.9038 Epoch 12: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.4688 - loss: 2.9038 - val_accuracy: 0.5000 - val_loss: 2.9244 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2090.1293630599976 segundos Uso de CPU durante el entrenamiento: 35.7% Aumento en uso de memoria: -0.5012664794921875 GB Resultados para lr=0.0001, l2=0.05, batch_size=32: Tiempo: 2090.1293630599976 segundos, CPU: 35.7%, Memoria: -0.5012664794921875 GB Precisión de validación: 0.5 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_14"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_7 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.0560 - loss: 3.2283 Epoch 1: val_accuracy improved from -inf to 0.08665, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 362s 9s/step - accuracy: 0.0561 - loss: 3.2279 - val_accuracy: 0.0866 - val_loss: 3.1875 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:07 6s/step - accuracy: 0.0781 - loss: 3.1928 Epoch 2: val_accuracy improved from 0.08665 to 0.11765, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 182ms/step - accuracy: 0.0781 - loss: 3.1928 - val_accuracy: 0.1176 - val_loss: 3.1972 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.0993 - loss: 3.1750 Epoch 3: val_accuracy improved from 0.11765 to 0.14062, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 341s 8s/step - accuracy: 0.0997 - loss: 3.1747 - val_accuracy: 0.1406 - val_loss: 3.1395 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:14 7s/step - accuracy: 0.0781 - loss: 3.1400 Epoch 4: val_accuracy improved from 0.14062 to 0.17647, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 154ms/step - accuracy: 0.0781 - loss: 3.1400 - val_accuracy: 0.1765 - val_loss: 3.1351 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.1304 - loss: 3.1284 Epoch 5: val_accuracy did not improve from 0.17647 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.1306 - loss: 3.1282 - val_accuracy: 0.1378 - val_loss: 3.0979 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:05 6s/step - accuracy: 0.1562 - loss: 3.0968 Epoch 6: val_accuracy did not improve from 0.17647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 90ms/step - accuracy: 0.1562 - loss: 3.0968 - val_accuracy: 0.1471 - val_loss: 3.0951 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.1588 - loss: 3.0876 Epoch 7: val_accuracy improved from 0.17647 to 0.21449, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.1593 - loss: 3.0873 - val_accuracy: 0.2145 - val_loss: 3.0602 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:03 6s/step - accuracy: 0.2031 - loss: 3.0544 Epoch 8: val_accuracy did not improve from 0.21449 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 84ms/step - accuracy: 0.2031 - loss: 3.0544 - val_accuracy: 0.1765 - val_loss: 3.0659 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.2059 - loss: 3.0523 Epoch 9: val_accuracy improved from 0.21449 to 0.25710, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.2061 - loss: 3.0521 - val_accuracy: 0.2571 - val_loss: 3.0281 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.2812 - loss: 3.0274 Epoch 10: val_accuracy did not improve from 0.25710 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 84ms/step - accuracy: 0.2812 - loss: 3.0274 - val_accuracy: 0.2353 - val_loss: 3.0325 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.2654 - loss: 3.0205 Epoch 11: val_accuracy improved from 0.25710 to 0.28693, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 341s 8s/step - accuracy: 0.2656 - loss: 3.0204 - val_accuracy: 0.2869 - val_loss: 3.0010 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:42 7s/step - accuracy: 0.2500 - loss: 3.0018 Epoch 12: val_accuracy did not improve from 0.28693 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 97ms/step - accuracy: 0.2500 - loss: 3.0018 - val_accuracy: 0.2647 - val_loss: 3.0013 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.2995 - loss: 2.9953 Epoch 13: val_accuracy improved from 0.28693 to 0.30824, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.2994 - loss: 2.9952 - val_accuracy: 0.3082 - val_loss: 2.9767 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.2812 - loss: 2.9798 Epoch 14: val_accuracy did not improve from 0.30824 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.2812 - loss: 2.9798 - val_accuracy: 0.2353 - val_loss: 2.9869 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.3038 - loss: 2.9719 Epoch 15: val_accuracy improved from 0.30824 to 0.31676, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 347s 9s/step - accuracy: 0.3039 - loss: 2.9718 - val_accuracy: 0.3168 - val_loss: 2.9564 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:05 6s/step - accuracy: 0.3750 - loss: 2.9548 Epoch 16: val_accuracy did not improve from 0.31676 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 93ms/step - accuracy: 0.3750 - loss: 2.9548 - val_accuracy: 0.2647 - val_loss: 2.9630 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.3335 - loss: 2.9509 Epoch 17: val_accuracy improved from 0.31676 to 0.33381, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.3332 - loss: 2.9508 - val_accuracy: 0.3338 - val_loss: 2.9390 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.3125 - loss: 2.9390 Epoch 18: val_accuracy improved from 0.33381 to 0.35294, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 143ms/step - accuracy: 0.3125 - loss: 2.9390 - val_accuracy: 0.3529 - val_loss: 2.9348 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.3304 - loss: 2.9348 Epoch 19: val_accuracy improved from 0.35294 to 0.37358, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.3307 - loss: 2.9347 - val_accuracy: 0.3736 - val_loss: 2.9227 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.3594 - loss: 2.9195 Epoch 20: val_accuracy did not improve from 0.37358 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.3594 - loss: 2.9195 - val_accuracy: 0.2647 - val_loss: 2.9281 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.3927 - loss: 2.9194 Epoch 21: val_accuracy improved from 0.37358 to 0.38920, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 342s 8s/step - accuracy: 0.3924 - loss: 2.9193 - val_accuracy: 0.3892 - val_loss: 2.9089 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:19 7s/step - accuracy: 0.3594 - loss: 2.9123 Epoch 22: val_accuracy did not improve from 0.38920 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.3594 - loss: 2.9123 - val_accuracy: 0.3529 - val_loss: 2.9151 Epoch 23/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.3828 - loss: 2.9063 Epoch 23: val_accuracy improved from 0.38920 to 0.41051, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.3834 - loss: 2.9063 - val_accuracy: 0.4105 - val_loss: 2.8982 Epoch 24/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:17 7s/step - accuracy: 0.3906 - loss: 2.8971 Epoch 24: val_accuracy improved from 0.41051 to 0.44118, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 145ms/step - accuracy: 0.3906 - loss: 2.8971 - val_accuracy: 0.4412 - val_loss: 2.8982 Epoch 25/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4000 - loss: 2.8953 Epoch 25: val_accuracy did not improve from 0.44118 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.4002 - loss: 2.8952 - val_accuracy: 0.4105 - val_loss: 2.8882 Epoch 26/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:17 7s/step - accuracy: 0.2969 - loss: 2.8918 Epoch 26: val_accuracy improved from 0.44118 to 0.50000, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 151ms/step - accuracy: 0.2969 - loss: 2.8918 - val_accuracy: 0.5000 - val_loss: 2.8780 Epoch 27/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4405 - loss: 2.8842 Epoch 27: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 345s 9s/step - accuracy: 0.4405 - loss: 2.8842 - val_accuracy: 0.4503 - val_loss: 2.8787 Epoch 28/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.3750 - loss: 2.8840 Epoch 28: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.3750 - loss: 2.8840 - val_accuracy: 0.3824 - val_loss: 2.8780 Epoch 29/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4500 - loss: 2.8754 Epoch 29: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 335s 8s/step - accuracy: 0.4500 - loss: 2.8754 - val_accuracy: 0.4531 - val_loss: 2.8710 Epoch 30/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:05 6s/step - accuracy: 0.4062 - loss: 2.8761 Epoch 30: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 95ms/step - accuracy: 0.4062 - loss: 2.8761 - val_accuracy: 0.5000 - val_loss: 2.8717 Epoch 31/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4720 - loss: 2.8684 Epoch 31: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.4716 - loss: 2.8683 - val_accuracy: 0.4659 - val_loss: 2.8644 Epoch 32/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.5469 - loss: 2.8574 Epoch 32: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 91ms/step - accuracy: 0.5469 - loss: 2.8574 - val_accuracy: 0.4706 - val_loss: 2.8643 Epoch 33/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4668 - loss: 2.8623 Epoch 33: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.4669 - loss: 2.8623 - val_accuracy: 0.4986 - val_loss: 2.8580 Epoch 34/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.4844 - loss: 2.8568 Epoch 34: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.4844 - loss: 2.8568 - val_accuracy: 0.4412 - val_loss: 2.8559 Epoch 35/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4727 - loss: 2.8555 Epoch 35: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.4727 - loss: 2.8555 - val_accuracy: 0.4787 - val_loss: 2.8529 Epoch 36/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:23 7s/step - accuracy: 0.4688 - loss: 2.8436 Epoch 36: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 99ms/step - accuracy: 0.4688 - loss: 2.8436 - val_accuracy: 0.3824 - val_loss: 2.8607 Epoch 36: early stopping Restoring model weights from the end of the best epoch: 26. Tiempo transcurrido para el entrenamiento: 6330.446506977081 segundos Uso de CPU durante el entrenamiento: 40.89999999999999% Aumento en uso de memoria: 0.4019775390625 GB Resultados para lr=0.0001, l2=0.05, batch_size=64: Tiempo: 6330.446506977081 segundos, CPU: 40.89999999999999%, Memoria: 0.4019775390625 GB Precisión de validación: 0.5 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_16"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_8 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.0853 - loss: 3.1943 Epoch 1: val_accuracy improved from -inf to 0.16984, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 355s 2s/step - accuracy: 0.0855 - loss: 3.1940 - val_accuracy: 0.1698 - val_loss: 3.0692 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:35 2s/step - accuracy: 0.2500 - loss: 3.0561 Epoch 2: val_accuracy improved from 0.16984 to 0.50000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 5s 22ms/step - accuracy: 0.2500 - loss: 3.0561 - val_accuracy: 0.5000 - val_loss: 3.0764 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.2061 - loss: 3.0397 Epoch 3: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 359s 2s/step - accuracy: 0.2062 - loss: 3.0395 - val_accuracy: 0.2473 - val_loss: 2.9681 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:48 2s/step - accuracy: 0.1875 - loss: 2.9743 Epoch 4: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.1875 - loss: 2.9743 - val_accuracy: 0.0000e+00 - val_loss: 2.9952 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.2686 - loss: 2.9525 Epoch 5: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.2686 - loss: 2.9524 - val_accuracy: 0.2772 - val_loss: 2.9127 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.3750 - loss: 2.9119 Epoch 6: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3750 - loss: 2.9119 - val_accuracy: 0.5000 - val_loss: 2.9348 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.3174 - loss: 2.9038 Epoch 7: val_accuracy did not improve from 0.50000 163/163 ━━━━━━━━━━━━━━━━━━━━ 352s 2s/step - accuracy: 0.3174 - loss: 2.9038 - val_accuracy: 0.3329 - val_loss: 2.8813 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:23 2s/step - accuracy: 0.5625 - loss: 2.8722 Epoch 8: val_accuracy improved from 0.50000 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 4s 13ms/step - accuracy: 0.5625 - loss: 2.8722 - val_accuracy: 1.0000 - val_loss: 2.8368 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.3585 - loss: 2.8760 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.3586 - loss: 2.8760 - val_accuracy: 0.3804 - val_loss: 2.8621 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:25 2s/step - accuracy: 0.3750 - loss: 2.8686 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3750 - loss: 2.8686 - val_accuracy: 1.0000 - val_loss: 2.8165 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.3938 - loss: 2.8580 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.3939 - loss: 2.8580 - val_accuracy: 0.4348 - val_loss: 2.8503 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.3750 - loss: 2.8423 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3750 - loss: 2.8423 - val_accuracy: 0.0000e+00 - val_loss: 2.8909 Epoch 13/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4700 - loss: 2.8483 Epoch 13: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 348s 2s/step - accuracy: 0.4700 - loss: 2.8483 - val_accuracy: 0.4701 - val_loss: 2.8421 Epoch 14/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:15 2s/step - accuracy: 0.5000 - loss: 2.8387 Epoch 14: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5000 - loss: 2.8387 - val_accuracy: 0.5000 - val_loss: 2.8016 Epoch 15/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4747 - loss: 2.8408 Epoch 15: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.4748 - loss: 2.8408 - val_accuracy: 0.4783 - val_loss: 2.8360 Epoch 16/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.3750 - loss: 2.8509 Epoch 16: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3750 - loss: 2.8509 - val_accuracy: 0.5000 - val_loss: 2.8490 Epoch 17/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4948 - loss: 2.8336 Epoch 17: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 361s 2s/step - accuracy: 0.4948 - loss: 2.8336 - val_accuracy: 0.4973 - val_loss: 2.8295 Epoch 18/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.3750 - loss: 2.8351 Epoch 18: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3750 - loss: 2.8351 - val_accuracy: 0.0000e+00 - val_loss: 2.8519 Epoch 18: early stopping Restoring model weights from the end of the best epoch: 8. Tiempo transcurrido para el entrenamiento: 3196.9851167201996 segundos Uso de CPU durante el entrenamiento: 41.3% Aumento en uso de memoria: -0.6323471069335938 GB Resultados para lr=0.0001, l2=0.1, batch_size=16: Tiempo: 3196.9851167201996 segundos, CPU: 41.3%, Memoria: -0.6323471069335938 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_18"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_9 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.0545 - loss: 3.2192 Epoch 1: val_accuracy improved from -inf to 0.07201, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 349s 4s/step - accuracy: 0.0545 - loss: 3.2189 - val_accuracy: 0.0720 - val_loss: 3.1452 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:08 3s/step - accuracy: 0.1250 - loss: 3.1491 Epoch 2: val_accuracy improved from 0.07201 to 0.50000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 6s 31ms/step - accuracy: 0.1250 - loss: 3.1491 - val_accuracy: 0.5000 - val_loss: 3.1301 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.0782 - loss: 3.1235 Epoch 3: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.0785 - loss: 3.1233 - val_accuracy: 0.1168 - val_loss: 3.0672 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.0938 - loss: 3.0692 Epoch 4: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.0938 - loss: 3.0692 - val_accuracy: 0.0000e+00 - val_loss: 3.0670 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.1578 - loss: 3.0509 Epoch 5: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.1580 - loss: 3.0507 - val_accuracy: 0.2065 - val_loss: 3.0084 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:18 3s/step - accuracy: 0.1875 - loss: 3.0112 Epoch 6: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.1875 - loss: 3.0112 - val_accuracy: 0.0000e+00 - val_loss: 3.0557 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.2360 - loss: 2.9954 Epoch 7: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 341s 4s/step - accuracy: 0.2361 - loss: 2.9953 - val_accuracy: 0.2582 - val_loss: 2.9648 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:02 3s/step - accuracy: 0.3750 - loss: 2.9655 Epoch 8: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.3750 - loss: 2.9655 - val_accuracy: 0.0000e+00 - val_loss: 2.9459 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.2611 - loss: 2.9551 Epoch 9: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.2614 - loss: 2.9550 - val_accuracy: 0.2880 - val_loss: 2.9307 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:09 3s/step - accuracy: 0.3125 - loss: 2.9365 Epoch 10: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.3125 - loss: 2.9365 - val_accuracy: 0.0000e+00 - val_loss: 2.8752 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.3115 - loss: 2.9233 Epoch 11: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 350s 4s/step - accuracy: 0.3115 - loss: 2.9232 - val_accuracy: 0.3152 - val_loss: 2.9061 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:04 3s/step - accuracy: 0.4062 - loss: 2.9018 Epoch 12: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.4062 - loss: 2.9018 - val_accuracy: 0.5000 - val_loss: 2.8846 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2093.3566970825195 segundos Uso de CPU durante el entrenamiento: 41.4% Aumento en uso de memoria: -0.2600898742675781 GB Resultados para lr=0.0001, l2=0.1, batch_size=32: Tiempo: 2093.3566970825195 segundos, CPU: 41.4%, Memoria: -0.2600898742675781 GB Precisión de validación: 0.5 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_20"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_10 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.0383 - loss: 3.2350 Epoch 1: val_accuracy improved from -inf to 0.05682, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 349s 8s/step - accuracy: 0.0384 - loss: 3.2346 - val_accuracy: 0.0568 - val_loss: 3.1941 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.0469 - loss: 3.1963 Epoch 2: val_accuracy did not improve from 0.05682 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 91ms/step - accuracy: 0.0469 - loss: 3.1963 - val_accuracy: 0.0294 - val_loss: 3.1926 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.0603 - loss: 3.1807 Epoch 3: val_accuracy improved from 0.05682 to 0.08381, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.0605 - loss: 3.1804 - val_accuracy: 0.0838 - val_loss: 3.1449 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.0781 - loss: 3.1474 Epoch 4: val_accuracy did not improve from 0.08381 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.0781 - loss: 3.1474 - val_accuracy: 0.0588 - val_loss: 3.1452 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.0955 - loss: 3.1337 Epoch 5: val_accuracy improved from 0.08381 to 0.13778, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 341s 8s/step - accuracy: 0.0959 - loss: 3.1335 - val_accuracy: 0.1378 - val_loss: 3.1032 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.1094 - loss: 3.1056 Epoch 6: val_accuracy improved from 0.13778 to 0.29412, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 142ms/step - accuracy: 0.1094 - loss: 3.1056 - val_accuracy: 0.2941 - val_loss: 3.1019 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.1770 - loss: 3.0917 Epoch 7: val_accuracy did not improve from 0.29412 40/40 ━━━━━━━━━━━━━━━━━━━━ 342s 8s/step - accuracy: 0.1767 - loss: 3.0915 - val_accuracy: 0.1705 - val_loss: 3.0664 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.1875 - loss: 3.0669 Epoch 8: val_accuracy did not improve from 0.29412 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 87ms/step - accuracy: 0.1875 - loss: 3.0669 - val_accuracy: 0.1471 - val_loss: 3.0635 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.1764 - loss: 3.0575 Epoch 9: val_accuracy did not improve from 0.29412 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.1769 - loss: 3.0573 - val_accuracy: 0.2145 - val_loss: 3.0345 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.2188 - loss: 3.0314 Epoch 10: val_accuracy did not improve from 0.29412 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 86ms/step - accuracy: 0.2188 - loss: 3.0314 - val_accuracy: 0.2647 - val_loss: 3.0337 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.2328 - loss: 3.0264 Epoch 11: val_accuracy did not improve from 0.29412 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.2330 - loss: 3.0262 - val_accuracy: 0.2429 - val_loss: 3.0069 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.1875 - loss: 3.0114 Epoch 12: val_accuracy did not improve from 0.29412 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 91ms/step - accuracy: 0.1875 - loss: 3.0114 - val_accuracy: 0.1765 - val_loss: 3.0064 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.2490 - loss: 2.9998 Epoch 13: val_accuracy did not improve from 0.29412 40/40 ━━━━━━━━━━━━━━━━━━━━ 344s 9s/step - accuracy: 0.2492 - loss: 2.9997 - val_accuracy: 0.2557 - val_loss: 2.9832 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.1719 - loss: 2.9882 Epoch 14: val_accuracy improved from 0.29412 to 0.35294, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 150ms/step - accuracy: 0.1719 - loss: 2.9882 - val_accuracy: 0.3529 - val_loss: 2.9832 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.2663 - loss: 2.9766 Epoch 15: val_accuracy did not improve from 0.35294 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.2667 - loss: 2.9765 - val_accuracy: 0.2628 - val_loss: 2.9619 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.2812 - loss: 2.9575 Epoch 16: val_accuracy did not improve from 0.35294 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.2812 - loss: 2.9575 - val_accuracy: 0.3235 - val_loss: 2.9610 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.2868 - loss: 2.9573 Epoch 17: val_accuracy did not improve from 0.35294 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.2872 - loss: 2.9572 - val_accuracy: 0.3168 - val_loss: 2.9440 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.4062 - loss: 2.9373 Epoch 18: val_accuracy did not improve from 0.35294 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.4062 - loss: 2.9373 - val_accuracy: 0.2941 - val_loss: 2.9432 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.3289 - loss: 2.9390 Epoch 19: val_accuracy did not improve from 0.35294 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.3291 - loss: 2.9389 - val_accuracy: 0.3395 - val_loss: 2.9284 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:48 7s/step - accuracy: 0.3125 - loss: 2.9247 Epoch 20: val_accuracy did not improve from 0.35294 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 86ms/step - accuracy: 0.3125 - loss: 2.9247 - val_accuracy: 0.1765 - val_loss: 2.9330 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.3539 - loss: 2.9242 Epoch 21: val_accuracy improved from 0.35294 to 0.36080, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 347s 9s/step - accuracy: 0.3540 - loss: 2.9241 - val_accuracy: 0.3608 - val_loss: 2.9149 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:55 8s/step - accuracy: 0.3438 - loss: 2.9140 Epoch 22: val_accuracy improved from 0.36080 to 0.44118, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 16s 218ms/step - accuracy: 0.3438 - loss: 2.9140 - val_accuracy: 0.4412 - val_loss: 2.9103 Epoch 23/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.3800 - loss: 2.9108 Epoch 23: val_accuracy did not improve from 0.44118 40/40 ━━━━━━━━━━━━━━━━━━━━ 345s 8s/step - accuracy: 0.3801 - loss: 2.9107 - val_accuracy: 0.3764 - val_loss: 2.9036 Epoch 24/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:08 6s/step - accuracy: 0.3438 - loss: 2.9003 Epoch 24: val_accuracy improved from 0.44118 to 0.47059, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 154ms/step - accuracy: 0.3438 - loss: 2.9003 - val_accuracy: 0.4706 - val_loss: 2.9024 Epoch 25/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.3783 - loss: 2.8986 Epoch 25: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 341s 8s/step - accuracy: 0.3786 - loss: 2.8986 - val_accuracy: 0.3949 - val_loss: 2.8936 Epoch 26/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:12 6s/step - accuracy: 0.3906 - loss: 2.8955 Epoch 26: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.3906 - loss: 2.8955 - val_accuracy: 0.4412 - val_loss: 2.8852 Epoch 27/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.3969 - loss: 2.8893 Epoch 27: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 348s 9s/step - accuracy: 0.3974 - loss: 2.8892 - val_accuracy: 0.4176 - val_loss: 2.8838 Epoch 28/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.3438 - loss: 2.8876 Epoch 28: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.3438 - loss: 2.8876 - val_accuracy: 0.4706 - val_loss: 2.8887 Epoch 29/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4356 - loss: 2.8802 Epoch 29: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.4354 - loss: 2.8802 - val_accuracy: 0.4219 - val_loss: 2.8756 Epoch 30/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:12 6s/step - accuracy: 0.3906 - loss: 2.8805 Epoch 30: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.3906 - loss: 2.8805 - val_accuracy: 0.4412 - val_loss: 2.8788 Epoch 31/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4370 - loss: 2.8744 Epoch 31: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 335s 8s/step - accuracy: 0.4370 - loss: 2.8744 - val_accuracy: 0.4176 - val_loss: 2.8696 Epoch 32/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.5156 - loss: 2.8642 Epoch 32: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.5156 - loss: 2.8642 - val_accuracy: 0.4706 - val_loss: 2.8600 Epoch 33/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4536 - loss: 2.8662 Epoch 33: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.4534 - loss: 2.8662 - val_accuracy: 0.4460 - val_loss: 2.8640 Epoch 34/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.5469 - loss: 2.8608 Epoch 34: val_accuracy improved from 0.47059 to 0.52941, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 139ms/step - accuracy: 0.5469 - loss: 2.8608 - val_accuracy: 0.5294 - val_loss: 2.8583 Epoch 35/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4732 - loss: 2.8603 Epoch 35: val_accuracy did not improve from 0.52941 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.4729 - loss: 2.8603 - val_accuracy: 0.4474 - val_loss: 2.8579 Epoch 36/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:10 6s/step - accuracy: 0.4219 - loss: 2.8595 Epoch 36: val_accuracy did not improve from 0.52941 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.4219 - loss: 2.8595 - val_accuracy: 0.4706 - val_loss: 2.8658 Epoch 37/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4609 - loss: 2.8542 Epoch 37: val_accuracy did not improve from 0.52941 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.4609 - loss: 2.8542 - val_accuracy: 0.4418 - val_loss: 2.8532 Epoch 38/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.5156 - loss: 2.8548 Epoch 38: val_accuracy did not improve from 0.52941 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 94ms/step - accuracy: 0.5156 - loss: 2.8548 - val_accuracy: 0.4706 - val_loss: 2.8553 Epoch 39/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4852 - loss: 2.8504 Epoch 39: val_accuracy did not improve from 0.52941 40/40 ━━━━━━━━━━━━━━━━━━━━ 343s 8s/step - accuracy: 0.4849 - loss: 2.8503 - val_accuracy: 0.4602 - val_loss: 2.8487 Epoch 40/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:55 6s/step - accuracy: 0.3750 - loss: 2.8548 Epoch 40: val_accuracy did not improve from 0.52941 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 95ms/step - accuracy: 0.3750 - loss: 2.8548 - val_accuracy: 0.3824 - val_loss: 2.8422 Epoch 41/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4663 - loss: 2.8471 Epoch 41: val_accuracy did not improve from 0.52941 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.4667 - loss: 2.8471 - val_accuracy: 0.4702 - val_loss: 2.8452 Epoch 42/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.5625 - loss: 2.8385 Epoch 42: val_accuracy improved from 0.52941 to 0.61765, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 152ms/step - accuracy: 0.5625 - loss: 2.8385 - val_accuracy: 0.6176 - val_loss: 2.8378 Epoch 43/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4842 - loss: 2.8435 Epoch 43: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.4843 - loss: 2.8435 - val_accuracy: 0.4716 - val_loss: 2.8423 Epoch 44/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.4844 - loss: 2.8379 Epoch 44: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 106ms/step - accuracy: 0.4844 - loss: 2.8379 - val_accuracy: 0.4412 - val_loss: 2.8418 Epoch 45/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4913 - loss: 2.8397 Epoch 45: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 335s 8s/step - accuracy: 0.4913 - loss: 2.8396 - val_accuracy: 0.4872 - val_loss: 2.8394 Epoch 46/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:29 7s/step - accuracy: 0.5312 - loss: 2.8342 Epoch 46: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 91ms/step - accuracy: 0.5312 - loss: 2.8342 - val_accuracy: 0.4412 - val_loss: 2.8315 Epoch 47/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5163 - loss: 2.8347 Epoch 47: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.5162 - loss: 2.8347 - val_accuracy: 0.4943 - val_loss: 2.8352 Epoch 48/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.5000 - loss: 2.8295 Epoch 48: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 91ms/step - accuracy: 0.5000 - loss: 2.8295 - val_accuracy: 0.5294 - val_loss: 2.8346 Epoch 49/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4969 - loss: 2.8336 Epoch 49: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.4971 - loss: 2.8335 - val_accuracy: 0.4972 - val_loss: 2.8331 Epoch 50/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.5625 - loss: 2.8222 Epoch 50: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.5625 - loss: 2.8222 - val_accuracy: 0.4118 - val_loss: 2.8404 Restoring model weights from the end of the best epoch: 42. Tiempo transcurrido para el entrenamiento: 8758.206644535065 segundos Uso de CPU durante el entrenamiento: 44.7% Aumento en uso de memoria: 0.4933815002441406 GB Resultados para lr=0.0001, l2=0.1, batch_size=64: Tiempo: 8758.206644535065 segundos, CPU: 44.7%, Memoria: 0.4933815002441406 GB Precisión de validación: 0.6176470518112183 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_22"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_11 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.2015 - loss: 3.0636 Epoch 1: val_accuracy improved from -inf to 0.42663, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 360s 2s/step - accuracy: 0.2020 - loss: 3.0630 - val_accuracy: 0.4266 - val_loss: 2.8548 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 5:51 2s/step - accuracy: 0.4375 - loss: 2.8509 Epoch 2: val_accuracy improved from 0.42663 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 5s 18ms/step - accuracy: 0.4375 - loss: 2.8509 - val_accuracy: 1.0000 - val_loss: 2.8538 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4730 - loss: 2.8419 Epoch 3: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 358s 2s/step - accuracy: 0.4730 - loss: 2.8419 - val_accuracy: 0.5245 - val_loss: 2.8181 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.5625 - loss: 2.8304 Epoch 4: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5625 - loss: 2.8304 - val_accuracy: 0.5000 - val_loss: 2.7663 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5449 - loss: 2.8118 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.5449 - loss: 2.8118 - val_accuracy: 0.5299 - val_loss: 2.7961 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.5625 - loss: 2.7999 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5625 - loss: 2.7999 - val_accuracy: 0.5000 - val_loss: 2.8246 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5746 - loss: 2.7888 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.5746 - loss: 2.7888 - val_accuracy: 0.5408 - val_loss: 2.7788 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.5625 - loss: 2.7803 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5625 - loss: 2.7803 - val_accuracy: 0.5000 - val_loss: 2.8608 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5752 - loss: 2.7687 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.5752 - loss: 2.7687 - val_accuracy: 0.5747 - val_loss: 2.7654 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.7500 - loss: 2.7698 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7500 - loss: 2.7698 - val_accuracy: 1.0000 - val_loss: 2.7166 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5964 - loss: 2.7587 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.5964 - loss: 2.7586 - val_accuracy: 0.5611 - val_loss: 2.7552 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.5625 - loss: 2.7537 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5625 - loss: 2.7537 - val_accuracy: 0.0000e+00 - val_loss: 2.8822 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2132.980417728424 segundos Uso de CPU durante el entrenamiento: 37.8% Aumento en uso de memoria: -1.5767440795898438 GB Resultados para lr=0.0005, l2=0.01, batch_size=16: Tiempo: 2132.980417728424 segundos, CPU: 37.8%, Memoria: -1.5767440795898438 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_24"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_12 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.1225 - loss: 3.1285 Epoch 1: val_accuracy improved from -inf to 0.35734, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 349s 4s/step - accuracy: 0.1237 - loss: 3.1275 - val_accuracy: 0.3573 - val_loss: 2.9184 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:09 3s/step - accuracy: 0.3750 - loss: 2.9194 Epoch 2: val_accuracy did not improve from 0.35734 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.3750 - loss: 2.9194 - val_accuracy: 0.0000e+00 - val_loss: 2.9299 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.4026 - loss: 2.8914 Epoch 3: val_accuracy improved from 0.35734 to 0.47283, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 346s 4s/step - accuracy: 0.4028 - loss: 2.8912 - val_accuracy: 0.4728 - val_loss: 2.8442 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 5:21 4s/step - accuracy: 0.5000 - loss: 2.8305 Epoch 4: val_accuracy improved from 0.47283 to 0.50000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 6s 28ms/step - accuracy: 0.5000 - loss: 2.8305 - val_accuracy: 0.5000 - val_loss: 2.8547 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.4857 - loss: 2.8375 Epoch 5: val_accuracy improved from 0.50000 to 0.52717, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 350s 4s/step - accuracy: 0.4858 - loss: 2.8374 - val_accuracy: 0.5272 - val_loss: 2.8199 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:59 3s/step - accuracy: 0.5938 - loss: 2.7965 Epoch 6: val_accuracy did not improve from 0.52717 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5938 - loss: 2.7965 - val_accuracy: 0.5000 - val_loss: 2.8333 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5424 - loss: 2.8162 Epoch 7: val_accuracy improved from 0.52717 to 0.54348, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 349s 4s/step - accuracy: 0.5422 - loss: 2.8162 - val_accuracy: 0.5435 - val_loss: 2.8047 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:01 3s/step - accuracy: 0.6562 - loss: 2.8174 Epoch 8: val_accuracy did not improve from 0.54348 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.6562 - loss: 2.8174 - val_accuracy: 0.0000e+00 - val_loss: 2.8058 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5433 - loss: 2.8015 Epoch 9: val_accuracy improved from 0.54348 to 0.57337, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.5434 - loss: 2.8015 - val_accuracy: 0.5734 - val_loss: 2.7906 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.5625 - loss: 2.7664 Epoch 10: val_accuracy did not improve from 0.57337 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5625 - loss: 2.7664 - val_accuracy: 0.5000 - val_loss: 2.7853 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5571 - loss: 2.7870 Epoch 11: val_accuracy improved from 0.57337 to 0.57609, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.5572 - loss: 2.7870 - val_accuracy: 0.5761 - val_loss: 2.7829 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.5312 - loss: 2.7984 Epoch 12: val_accuracy improved from 0.57609 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 5s 30ms/step - accuracy: 0.5312 - loss: 2.7984 - val_accuracy: 1.0000 - val_loss: 2.6973 Epoch 13/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5623 - loss: 2.7765 Epoch 13: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.5623 - loss: 2.7765 - val_accuracy: 0.5693 - val_loss: 2.7741 Epoch 14/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:04 3s/step - accuracy: 0.5312 - loss: 2.7777 Epoch 14: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 5ms/step - accuracy: 0.5312 - loss: 2.7777 - val_accuracy: 0.5000 - val_loss: 2.8106 Epoch 15/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5682 - loss: 2.7690 Epoch 15: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.5682 - loss: 2.7689 - val_accuracy: 0.5611 - val_loss: 2.7630 Epoch 16/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 5:14 4s/step - accuracy: 0.5625 - loss: 2.7736 Epoch 16: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.5625 - loss: 2.7736 - val_accuracy: 1.0000 - val_loss: 2.7360 Epoch 17/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5740 - loss: 2.7564 Epoch 17: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 349s 4s/step - accuracy: 0.5739 - loss: 2.7564 - val_accuracy: 0.5557 - val_loss: 2.7505 Epoch 18/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.5625 - loss: 2.7368 Epoch 18: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5625 - loss: 2.7368 - val_accuracy: 0.0000e+00 - val_loss: 2.6997 Epoch 19/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5686 - loss: 2.7498 Epoch 19: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.5686 - loss: 2.7498 - val_accuracy: 0.5666 - val_loss: 2.7487 Epoch 20/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:04 3s/step - accuracy: 0.6562 - loss: 2.7347 Epoch 20: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.6562 - loss: 2.7347 - val_accuracy: 0.5000 - val_loss: 2.7975 Epoch 21/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5831 - loss: 2.7454 Epoch 21: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 341s 4s/step - accuracy: 0.5830 - loss: 2.7454 - val_accuracy: 0.5421 - val_loss: 2.7407 Epoch 22/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:01 3s/step - accuracy: 0.5625 - loss: 2.7088 Epoch 22: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5625 - loss: 2.7088 - val_accuracy: 0.5000 - val_loss: 2.8002 Epoch 22: early stopping Restoring model weights from the end of the best epoch: 12. Tiempo transcurrido para el entrenamiento: 3846.1872708797455 segundos Uso de CPU durante el entrenamiento: 44.00000000000001% Aumento en uso de memoria: -0.29264068603515625 GB Resultados para lr=0.0005, l2=0.01, batch_size=32: Tiempo: 3846.1872708797455 segundos, CPU: 44.00000000000001%, Memoria: -0.29264068603515625 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_26"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_13 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.0913 - loss: 3.1883 Epoch 1: val_accuracy improved from -inf to 0.19034, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 353s 9s/step - accuracy: 0.0921 - loss: 3.1870 - val_accuracy: 0.1903 - val_loss: 3.0344 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:56 8s/step - accuracy: 0.1875 - loss: 3.0358 Epoch 2: val_accuracy improved from 0.19034 to 0.23529, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 14s 165ms/step - accuracy: 0.1875 - loss: 3.0358 - val_accuracy: 0.2353 - val_loss: 3.0330 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.2273 - loss: 2.9999 Epoch 3: val_accuracy improved from 0.23529 to 0.29261, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 341s 8s/step - accuracy: 0.2279 - loss: 2.9993 - val_accuracy: 0.2926 - val_loss: 2.9268 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.4062 - loss: 2.9223 Epoch 4: val_accuracy did not improve from 0.29261 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 92ms/step - accuracy: 0.4062 - loss: 2.9223 - val_accuracy: 0.2647 - val_loss: 2.9212 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.3170 - loss: 2.9104 Epoch 5: val_accuracy improved from 0.29261 to 0.39205, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 344s 9s/step - accuracy: 0.3177 - loss: 2.9101 - val_accuracy: 0.3920 - val_loss: 2.8765 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.2812 - loss: 2.8816 Epoch 6: val_accuracy improved from 0.39205 to 0.41176, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 171ms/step - accuracy: 0.2812 - loss: 2.8816 - val_accuracy: 0.4118 - val_loss: 2.8821 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4059 - loss: 2.8680 Epoch 7: val_accuracy improved from 0.41176 to 0.42472, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 346s 9s/step - accuracy: 0.4060 - loss: 2.8679 - val_accuracy: 0.4247 - val_loss: 2.8526 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.5312 - loss: 2.8458 Epoch 8: val_accuracy improved from 0.42472 to 0.47059, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 136ms/step - accuracy: 0.5312 - loss: 2.8458 - val_accuracy: 0.4706 - val_loss: 2.8424 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4369 - loss: 2.8461 Epoch 9: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.4372 - loss: 2.8461 - val_accuracy: 0.4432 - val_loss: 2.8361 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:10 6s/step - accuracy: 0.4375 - loss: 2.8359 Epoch 10: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.4375 - loss: 2.8359 - val_accuracy: 0.4706 - val_loss: 2.8332 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4317 - loss: 2.8323 Epoch 11: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.4324 - loss: 2.8323 - val_accuracy: 0.4517 - val_loss: 2.8266 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:17 7s/step - accuracy: 0.4531 - loss: 2.8205 Epoch 12: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.4531 - loss: 2.8205 - val_accuracy: 0.4118 - val_loss: 2.8277 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4823 - loss: 2.8222 Epoch 13: val_accuracy improved from 0.47059 to 0.48438, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.4822 - loss: 2.8222 - val_accuracy: 0.4844 - val_loss: 2.8176 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:55 6s/step - accuracy: 0.5156 - loss: 2.8041 Epoch 14: val_accuracy improved from 0.48438 to 0.58824, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 138ms/step - accuracy: 0.5156 - loss: 2.8041 - val_accuracy: 0.5882 - val_loss: 2.8088 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5059 - loss: 2.8132 Epoch 15: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.5057 - loss: 2.8131 - val_accuracy: 0.4886 - val_loss: 2.8099 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:12 6s/step - accuracy: 0.5625 - loss: 2.8102 Epoch 16: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.5625 - loss: 2.8102 - val_accuracy: 0.3235 - val_loss: 2.8336 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4785 - loss: 2.8060 Epoch 17: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.4788 - loss: 2.8059 - val_accuracy: 0.4815 - val_loss: 2.8019 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.5469 - loss: 2.8090 Epoch 18: val_accuracy improved from 0.58824 to 0.61765, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 154ms/step - accuracy: 0.5469 - loss: 2.8090 - val_accuracy: 0.6176 - val_loss: 2.8123 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5063 - loss: 2.7978 Epoch 19: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.5062 - loss: 2.7978 - val_accuracy: 0.4957 - val_loss: 2.7941 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.5469 - loss: 2.7938 Epoch 20: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.5469 - loss: 2.7938 - val_accuracy: 0.4412 - val_loss: 2.7993 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4973 - loss: 2.7926 Epoch 21: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 345s 9s/step - accuracy: 0.4973 - loss: 2.7925 - val_accuracy: 0.5014 - val_loss: 2.7900 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.5469 - loss: 2.7896 Epoch 22: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 99ms/step - accuracy: 0.5469 - loss: 2.7896 - val_accuracy: 0.4412 - val_loss: 2.8018 Epoch 23/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5162 - loss: 2.7847 Epoch 23: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.5162 - loss: 2.7846 - val_accuracy: 0.5156 - val_loss: 2.7841 Epoch 24/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.6094 - loss: 2.7578 Epoch 24: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.6094 - loss: 2.7578 - val_accuracy: 0.5882 - val_loss: 2.8058 Epoch 25/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5063 - loss: 2.7786 Epoch 25: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5064 - loss: 2.7785 - val_accuracy: 0.5156 - val_loss: 2.7765 Epoch 26/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:53 8s/step - accuracy: 0.5469 - loss: 2.7863 Epoch 26: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 100ms/step - accuracy: 0.5469 - loss: 2.7863 - val_accuracy: 0.4412 - val_loss: 2.7992 Epoch 27/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5200 - loss: 2.7737 Epoch 27: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.5200 - loss: 2.7736 - val_accuracy: 0.4929 - val_loss: 2.7722 Epoch 28/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:05 6s/step - accuracy: 0.4844 - loss: 2.7669 Epoch 28: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.4844 - loss: 2.7669 - val_accuracy: 0.5294 - val_loss: 2.7917 Epoch 28: early stopping Restoring model weights from the end of the best epoch: 18. Tiempo transcurrido para el entrenamiento: 4920.667220592499 segundos Uso de CPU durante el entrenamiento: 32.10000000000001% Aumento en uso de memoria: 0.7705421447753906 GB Resultados para lr=0.0005, l2=0.01, batch_size=64: Tiempo: 4920.667220592499 segundos, CPU: 32.10000000000001%, Memoria: 0.7705421447753906 GB Precisión de validación: 0.6176470518112183 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_28"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_14 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.1560 - loss: 3.0733 Epoch 1: val_accuracy improved from -inf to 0.40217, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 360s 2s/step - accuracy: 0.1565 - loss: 3.0727 - val_accuracy: 0.4022 - val_loss: 2.8613 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:58 2s/step - accuracy: 0.4375 - loss: 2.8690 Epoch 2: val_accuracy improved from 0.40217 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 5s 20ms/step - accuracy: 0.4375 - loss: 2.8690 - val_accuracy: 1.0000 - val_loss: 2.8305 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4571 - loss: 2.8489 Epoch 3: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 355s 2s/step - accuracy: 0.4573 - loss: 2.8489 - val_accuracy: 0.5326 - val_loss: 2.8251 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:33 2s/step - accuracy: 0.6250 - loss: 2.8501 Epoch 4: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6250 - loss: 2.8501 - val_accuracy: 0.5000 - val_loss: 2.8070 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5417 - loss: 2.8167 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 365s 2s/step - accuracy: 0.5419 - loss: 2.8166 - val_accuracy: 0.5584 - val_loss: 2.8020 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:20 2s/step - accuracy: 0.5000 - loss: 2.7923 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5000 - loss: 2.7923 - val_accuracy: 1.0000 - val_loss: 2.8346 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5843 - loss: 2.7952 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.5843 - loss: 2.7952 - val_accuracy: 0.5611 - val_loss: 2.7875 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.7500 - loss: 2.7362 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7500 - loss: 2.7362 - val_accuracy: 1.0000 - val_loss: 2.7071 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5714 - loss: 2.7773 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.5715 - loss: 2.7773 - val_accuracy: 0.5910 - val_loss: 2.7708 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.5000 - loss: 2.7910 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5000 - loss: 2.7910 - val_accuracy: 0.5000 - val_loss: 2.8868 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5866 - loss: 2.7607 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 352s 2s/step - accuracy: 0.5865 - loss: 2.7607 - val_accuracy: 0.5611 - val_loss: 2.7561 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 5:28 2s/step - accuracy: 0.5625 - loss: 2.8015 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5625 - loss: 2.8015 - val_accuracy: 0.0000e+00 - val_loss: 2.7754 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2145.9741504192352 segundos Uso de CPU durante el entrenamiento: 58.9% Aumento en uso de memoria: -1.9600105285644531 GB Resultados para lr=0.0005, l2=0.05, batch_size=16: Tiempo: 2145.9741504192352 segundos, CPU: 58.9%, Memoria: -1.9600105285644531 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_30"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_15 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.1232 - loss: 3.1322 Epoch 1: val_accuracy improved from -inf to 0.30707, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 353s 4s/step - accuracy: 0.1240 - loss: 3.1312 - val_accuracy: 0.3071 - val_loss: 2.9228 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.4062 - loss: 2.9174 Epoch 2: val_accuracy did not improve from 0.30707 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.4062 - loss: 2.9174 - val_accuracy: 0.0000e+00 - val_loss: 2.9656 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.3718 - loss: 2.8941 Epoch 3: val_accuracy improved from 0.30707 to 0.41304, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 347s 4s/step - accuracy: 0.3720 - loss: 2.8939 - val_accuracy: 0.4130 - val_loss: 2.8485 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:13 3s/step - accuracy: 0.4375 - loss: 2.8420 Epoch 4: val_accuracy did not improve from 0.41304 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.4375 - loss: 2.8420 - val_accuracy: 0.0000e+00 - val_loss: 2.8923 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.4353 - loss: 2.8410 Epoch 5: val_accuracy improved from 0.41304 to 0.48505, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.4356 - loss: 2.8410 - val_accuracy: 0.4851 - val_loss: 2.8258 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.4062 - loss: 2.8252 Epoch 6: val_accuracy did not improve from 0.48505 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.4062 - loss: 2.8252 - val_accuracy: 0.0000e+00 - val_loss: 2.8099 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5247 - loss: 2.8197 Epoch 7: val_accuracy improved from 0.48505 to 0.50272, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 357s 4s/step - accuracy: 0.5248 - loss: 2.8196 - val_accuracy: 0.5027 - val_loss: 2.8085 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.7188 - loss: 2.8131 Epoch 8: val_accuracy did not improve from 0.50272 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.7188 - loss: 2.8131 - val_accuracy: 0.5000 - val_loss: 2.8012 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5326 - loss: 2.8040 Epoch 9: val_accuracy improved from 0.50272 to 0.51766, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.5329 - loss: 2.8039 - val_accuracy: 0.5177 - val_loss: 2.7982 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.6562 - loss: 2.7964 Epoch 10: val_accuracy improved from 0.51766 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 5s 27ms/step - accuracy: 0.6562 - loss: 2.7964 - val_accuracy: 1.0000 - val_loss: 2.7540 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5706 - loss: 2.7896 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.5705 - loss: 2.7896 - val_accuracy: 0.5448 - val_loss: 2.7854 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:12 3s/step - accuracy: 0.6875 - loss: 2.7879 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.6875 - loss: 2.7879 - val_accuracy: 0.0000e+00 - val_loss: 2.7511 Epoch 13/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5848 - loss: 2.7782 Epoch 13: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 341s 4s/step - accuracy: 0.5847 - loss: 2.7782 - val_accuracy: 0.5639 - val_loss: 2.7797 Epoch 14/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.6562 - loss: 2.7627 Epoch 14: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.6562 - loss: 2.7627 - val_accuracy: 0.5000 - val_loss: 2.7438 Epoch 15/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.6072 - loss: 2.7692 Epoch 15: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.6070 - loss: 2.7692 - val_accuracy: 0.5774 - val_loss: 2.7682 Epoch 16/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:29 3s/step - accuracy: 0.6250 - loss: 2.7734 Epoch 16: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.6250 - loss: 2.7734 - val_accuracy: 0.0000e+00 - val_loss: 2.7860 Epoch 17/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5975 - loss: 2.7622 Epoch 17: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.5975 - loss: 2.7622 - val_accuracy: 0.5815 - val_loss: 2.7582 Epoch 18/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.5938 - loss: 2.7760 Epoch 18: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5938 - loss: 2.7760 - val_accuracy: 1.0000 - val_loss: 2.6446 Epoch 19/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.6032 - loss: 2.7521 Epoch 19: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.6031 - loss: 2.7521 - val_accuracy: 0.5870 - val_loss: 2.7540 Epoch 20/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.5938 - loss: 2.7687 Epoch 20: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5938 - loss: 2.7687 - val_accuracy: 0.5000 - val_loss: 2.7359 Epoch 20: early stopping Restoring model weights from the end of the best epoch: 10. Tiempo transcurrido para el entrenamiento: 3494.3435051441193 segundos Uso de CPU durante el entrenamiento: 43.3% Aumento en uso de memoria: 0.5475082397460938 GB Resultados para lr=0.0005, l2=0.05, batch_size=32: Tiempo: 3494.3435051441193 segundos, CPU: 43.3%, Memoria: 0.5475082397460938 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_32"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_16 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.0984 - loss: 3.1908 Epoch 1: val_accuracy improved from -inf to 0.18466, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 353s 9s/step - accuracy: 0.0990 - loss: 3.1895 - val_accuracy: 0.1847 - val_loss: 3.0370 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.1562 - loss: 3.0350 Epoch 2: val_accuracy improved from 0.18466 to 0.26471, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 173ms/step - accuracy: 0.1562 - loss: 3.0350 - val_accuracy: 0.2647 - val_loss: 3.0281 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.1981 - loss: 3.0008 Epoch 3: val_accuracy improved from 0.26471 to 0.28977, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 345s 9s/step - accuracy: 0.1987 - loss: 3.0002 - val_accuracy: 0.2898 - val_loss: 2.9268 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:23 7s/step - accuracy: 0.2188 - loss: 2.9286 Epoch 4: val_accuracy did not improve from 0.28977 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 101ms/step - accuracy: 0.2188 - loss: 2.9286 - val_accuracy: 0.2647 - val_loss: 2.9245 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.3239 - loss: 2.9095 Epoch 5: val_accuracy improved from 0.28977 to 0.35795, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.3241 - loss: 2.9092 - val_accuracy: 0.3580 - val_loss: 2.8748 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.3438 - loss: 2.8800 Epoch 6: val_accuracy improved from 0.35795 to 0.41176, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 147ms/step - accuracy: 0.3438 - loss: 2.8800 - val_accuracy: 0.4118 - val_loss: 2.8793 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.3991 - loss: 2.8673 Epoch 7: val_accuracy improved from 0.41176 to 0.46023, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 343s 8s/step - accuracy: 0.3997 - loss: 2.8672 - val_accuracy: 0.4602 - val_loss: 2.8506 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.4688 - loss: 2.8462 Epoch 8: val_accuracy did not improve from 0.46023 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 90ms/step - accuracy: 0.4688 - loss: 2.8462 - val_accuracy: 0.2941 - val_loss: 2.8561 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4757 - loss: 2.8448 Epoch 9: val_accuracy improved from 0.46023 to 0.50710, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.4760 - loss: 2.8447 - val_accuracy: 0.5071 - val_loss: 2.8354 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.5312 - loss: 2.8394 Epoch 10: val_accuracy improved from 0.50710 to 0.58824, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 140ms/step - accuracy: 0.5312 - loss: 2.8394 - val_accuracy: 0.5882 - val_loss: 2.8389 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5162 - loss: 2.8327 Epoch 11: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5164 - loss: 2.8326 - val_accuracy: 0.5185 - val_loss: 2.8238 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:13 6s/step - accuracy: 0.5312 - loss: 2.8164 Epoch 12: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 90ms/step - accuracy: 0.5312 - loss: 2.8164 - val_accuracy: 0.4412 - val_loss: 2.8324 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5300 - loss: 2.8204 Epoch 13: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 343s 8s/step - accuracy: 0.5298 - loss: 2.8203 - val_accuracy: 0.5241 - val_loss: 2.8141 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:14 7s/step - accuracy: 0.5000 - loss: 2.8074 Epoch 14: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 98ms/step - accuracy: 0.5000 - loss: 2.8074 - val_accuracy: 0.4118 - val_loss: 2.8212 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5559 - loss: 2.8102 Epoch 15: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5555 - loss: 2.8102 - val_accuracy: 0.5526 - val_loss: 2.8066 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.5000 - loss: 2.8009 Epoch 16: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 95ms/step - accuracy: 0.5000 - loss: 2.8009 - val_accuracy: 0.5882 - val_loss: 2.8188 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5608 - loss: 2.8000 Epoch 17: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5605 - loss: 2.8000 - val_accuracy: 0.5426 - val_loss: 2.7993 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.5625 - loss: 2.7925 Epoch 18: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.5625 - loss: 2.7925 - val_accuracy: 0.5000 - val_loss: 2.8058 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5456 - loss: 2.7949 Epoch 19: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.5460 - loss: 2.7949 - val_accuracy: 0.5582 - val_loss: 2.7939 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.5156 - loss: 2.7990 Epoch 20: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.5156 - loss: 2.7990 - val_accuracy: 0.4706 - val_loss: 2.7889 Epoch 20: early stopping Restoring model weights from the end of the best epoch: 10. Tiempo transcurrido para el entrenamiento: 3517.071481704712 segundos Uso de CPU durante el entrenamiento: 45.900000000000006% Aumento en uso de memoria: 0.386474609375 GB Resultados para lr=0.0005, l2=0.05, batch_size=64: Tiempo: 3517.071481704712 segundos, CPU: 45.900000000000006%, Memoria: 0.386474609375 GB Precisión de validación: 0.5882353186607361 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_34"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_17 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.1224 - loss: 3.0702 Epoch 1: val_accuracy improved from -inf to 0.37636, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 357s 2s/step - accuracy: 0.1229 - loss: 3.0695 - val_accuracy: 0.3764 - val_loss: 2.8578 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:28 2s/step - accuracy: 0.3750 - loss: 2.8611 Epoch 2: val_accuracy improved from 0.37636 to 0.50000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 4s 16ms/step - accuracy: 0.3750 - loss: 2.8611 - val_accuracy: 0.5000 - val_loss: 2.8441 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4033 - loss: 2.8455 Epoch 3: val_accuracy improved from 0.50000 to 0.51223, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 354s 2s/step - accuracy: 0.4035 - loss: 2.8454 - val_accuracy: 0.5122 - val_loss: 2.8205 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.3750 - loss: 2.8013 Epoch 4: val_accuracy did not improve from 0.51223 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3750 - loss: 2.8013 - val_accuracy: 0.5000 - val_loss: 2.8679 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5420 - loss: 2.8140 Epoch 5: val_accuracy improved from 0.51223 to 0.55707, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 357s 2s/step - accuracy: 0.5420 - loss: 2.8140 - val_accuracy: 0.5571 - val_loss: 2.8003 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 5:18 2s/step - accuracy: 0.6875 - loss: 2.7869 Epoch 6: val_accuracy did not improve from 0.55707 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6875 - loss: 2.7869 - val_accuracy: 0.5000 - val_loss: 2.7053 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5787 - loss: 2.7938 Epoch 7: val_accuracy improved from 0.55707 to 0.57337, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 360s 2s/step - accuracy: 0.5787 - loss: 2.7937 - val_accuracy: 0.5734 - val_loss: 2.7830 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 0.3750 - loss: 2.7758 Epoch 8: val_accuracy did not improve from 0.57337 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3750 - loss: 2.7758 - val_accuracy: 0.5000 - val_loss: 2.7883 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5754 - loss: 2.7747 Epoch 9: val_accuracy did not improve from 0.57337 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.5753 - loss: 2.7747 - val_accuracy: 0.5584 - val_loss: 2.7701 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:15 2s/step - accuracy: 0.4375 - loss: 2.7617 Epoch 10: val_accuracy improved from 0.57337 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 4s 14ms/step - accuracy: 0.4375 - loss: 2.7617 - val_accuracy: 1.0000 - val_loss: 2.7330 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5818 - loss: 2.7604 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 353s 2s/step - accuracy: 0.5819 - loss: 2.7604 - val_accuracy: 0.5720 - val_loss: 2.7556 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.7500 - loss: 2.7402 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.7500 - loss: 2.7402 - val_accuracy: 1.0000 - val_loss: 2.8124 Epoch 13/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5780 - loss: 2.7522 Epoch 13: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.5780 - loss: 2.7522 - val_accuracy: 0.5842 - val_loss: 2.7451 Epoch 14/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.6250 - loss: 2.7075 Epoch 14: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6250 - loss: 2.7075 - val_accuracy: 0.5000 - val_loss: 2.8552 Epoch 15/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5934 - loss: 2.7420 Epoch 15: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.5935 - loss: 2.7419 - val_accuracy: 0.5734 - val_loss: 2.7335 Epoch 16/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.6250 - loss: 2.7029 Epoch 16: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6250 - loss: 2.7029 - val_accuracy: 0.5000 - val_loss: 2.7628 Epoch 17/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5761 - loss: 2.7330 Epoch 17: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 359s 2s/step - accuracy: 0.5761 - loss: 2.7330 - val_accuracy: 0.5856 - val_loss: 2.7300 Epoch 18/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.6875 - loss: 2.7521 Epoch 18: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6875 - loss: 2.7521 - val_accuracy: 1.0000 - val_loss: 2.6692 Epoch 19/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5901 - loss: 2.7168 Epoch 19: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.5900 - loss: 2.7168 - val_accuracy: 0.5897 - val_loss: 2.7216 Epoch 20/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.3750 - loss: 2.7470 Epoch 20: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.3750 - loss: 2.7470 - val_accuracy: 0.5000 - val_loss: 2.9133 Epoch 20: early stopping Restoring model weights from the end of the best epoch: 10. Tiempo transcurrido para el entrenamiento: 3565.7795617580414 segundos Uso de CPU durante el entrenamiento: 45.599999999999994% Aumento en uso de memoria: -0.4730491638183594 GB Resultados para lr=0.0005, l2=0.1, batch_size=16: Tiempo: 3565.7795617580414 segundos, CPU: 45.599999999999994%, Memoria: -0.4730491638183594 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_36"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_18 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.1433 - loss: 3.1293 Epoch 1: val_accuracy improved from -inf to 0.33967, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 352s 4s/step - accuracy: 0.1442 - loss: 3.1283 - val_accuracy: 0.3397 - val_loss: 2.9191 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.3750 - loss: 2.9157 Epoch 2: val_accuracy improved from 0.33967 to 0.50000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 6s 35ms/step - accuracy: 0.3750 - loss: 2.9157 - val_accuracy: 0.5000 - val_loss: 2.9243 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.4088 - loss: 2.8918 Epoch 3: val_accuracy did not improve from 0.50000 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.4090 - loss: 2.8915 - val_accuracy: 0.4633 - val_loss: 2.8461 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:12 3s/step - accuracy: 0.3793 - loss: 2.8483 Epoch 4: val_accuracy improved from 0.50000 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 5s 29ms/step - accuracy: 0.3793 - loss: 2.8483 - val_accuracy: 1.0000 - val_loss: 2.8113 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.4844 - loss: 2.8378 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.4845 - loss: 2.8377 - val_accuracy: 0.4946 - val_loss: 2.8223 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.4688 - loss: 2.8319 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.4688 - loss: 2.8319 - val_accuracy: 0.5000 - val_loss: 2.7750 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5468 - loss: 2.8173 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 349s 4s/step - accuracy: 0.5466 - loss: 2.8173 - val_accuracy: 0.5299 - val_loss: 2.8081 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:19 3s/step - accuracy: 0.5000 - loss: 2.8177 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 6ms/step - accuracy: 0.5000 - loss: 2.8177 - val_accuracy: 0.5000 - val_loss: 2.7218 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5403 - loss: 2.8016 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.5405 - loss: 2.8015 - val_accuracy: 0.5353 - val_loss: 2.7943 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.4688 - loss: 2.7928 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.4688 - loss: 2.7928 - val_accuracy: 1.0000 - val_loss: 2.8047 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5486 - loss: 2.7937 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 349s 4s/step - accuracy: 0.5486 - loss: 2.7936 - val_accuracy: 0.5516 - val_loss: 2.7839 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:17 3s/step - accuracy: 0.6562 - loss: 2.7864 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.6562 - loss: 2.7864 - val_accuracy: 1.0000 - val_loss: 2.7364 Epoch 13/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5737 - loss: 2.7767 Epoch 13: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.5737 - loss: 2.7767 - val_accuracy: 0.5530 - val_loss: 2.7738 Epoch 14/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:26 3s/step - accuracy: 0.6562 - loss: 2.7798 Epoch 14: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.6562 - loss: 2.7798 - val_accuracy: 0.5000 - val_loss: 2.8315 Epoch 14: early stopping Restoring model weights from the end of the best epoch: 4. Tiempo transcurrido para el entrenamiento: 2456.2960608005524 segundos Uso de CPU durante el entrenamiento: 49.599999999999994% Aumento en uso de memoria: -0.13779067993164062 GB Resultados para lr=0.0005, l2=0.1, batch_size=32: Tiempo: 2456.2960608005524 segundos, CPU: 49.599999999999994%, Memoria: -0.13779067993164062 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_38"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_19 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.0843 - loss: 3.1845 Epoch 1: val_accuracy improved from -inf to 0.16335, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 345s 8s/step - accuracy: 0.0848 - loss: 3.1832 - val_accuracy: 0.1634 - val_loss: 3.0301 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.1719 - loss: 3.0297 Epoch 2: val_accuracy did not improve from 0.16335 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 90ms/step - accuracy: 0.1719 - loss: 3.0297 - val_accuracy: 0.1471 - val_loss: 3.0253 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.2015 - loss: 2.9939 Epoch 3: val_accuracy improved from 0.16335 to 0.32386, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 343s 8s/step - accuracy: 0.2021 - loss: 2.9932 - val_accuracy: 0.3239 - val_loss: 2.9209 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:07 6s/step - accuracy: 0.2031 - loss: 2.9261 Epoch 4: val_accuracy did not improve from 0.32386 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 99ms/step - accuracy: 0.2031 - loss: 2.9261 - val_accuracy: 0.2353 - val_loss: 2.9240 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.3370 - loss: 2.9046 Epoch 5: val_accuracy improved from 0.32386 to 0.39915, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.3376 - loss: 2.9042 - val_accuracy: 0.3991 - val_loss: 2.8710 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:07 6s/step - accuracy: 0.3438 - loss: 2.8682 Epoch 6: val_accuracy did not improve from 0.39915 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 95ms/step - accuracy: 0.3438 - loss: 2.8682 - val_accuracy: 0.2353 - val_loss: 2.8756 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4249 - loss: 2.8615 Epoch 7: val_accuracy improved from 0.39915 to 0.43040, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 347s 9s/step - accuracy: 0.4249 - loss: 2.8613 - val_accuracy: 0.4304 - val_loss: 2.8443 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:25 7s/step - accuracy: 0.5156 - loss: 2.8395 Epoch 8: val_accuracy improved from 0.43040 to 0.47059, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 14s 189ms/step - accuracy: 0.5156 - loss: 2.8395 - val_accuracy: 0.4706 - val_loss: 2.8478 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4604 - loss: 2.8401 Epoch 9: val_accuracy did not improve from 0.47059 40/40 ━━━━━━━━━━━━━━━━━━━━ 361s 9s/step - accuracy: 0.4605 - loss: 2.8400 - val_accuracy: 0.4645 - val_loss: 2.8300 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:33 7s/step - accuracy: 0.5781 - loss: 2.8216 Epoch 10: val_accuracy improved from 0.47059 to 0.50000, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 165ms/step - accuracy: 0.5781 - loss: 2.8216 - val_accuracy: 0.5000 - val_loss: 2.8279 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4707 - loss: 2.8250 Epoch 11: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 360s 9s/step - accuracy: 0.4712 - loss: 2.8250 - val_accuracy: 0.4787 - val_loss: 2.8187 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.4844 - loss: 2.8225 Epoch 12: val_accuracy improved from 0.50000 to 0.52941, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 163ms/step - accuracy: 0.4844 - loss: 2.8225 - val_accuracy: 0.5294 - val_loss: 2.8061 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5207 - loss: 2.8128 Epoch 13: val_accuracy did not improve from 0.52941 40/40 ━━━━━━━━━━━━━━━━━━━━ 345s 9s/step - accuracy: 0.5201 - loss: 2.8127 - val_accuracy: 0.4858 - val_loss: 2.8107 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:32 7s/step - accuracy: 0.4688 - loss: 2.8220 Epoch 14: val_accuracy improved from 0.52941 to 0.55882, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 160ms/step - accuracy: 0.4688 - loss: 2.8220 - val_accuracy: 0.5588 - val_loss: 2.8144 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5127 - loss: 2.8049 Epoch 15: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.5126 - loss: 2.8048 - val_accuracy: 0.5142 - val_loss: 2.8010 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.5156 - loss: 2.7943 Epoch 16: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.5156 - loss: 2.7943 - val_accuracy: 0.4706 - val_loss: 2.8339 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5338 - loss: 2.7985 Epoch 17: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.5334 - loss: 2.7985 - val_accuracy: 0.5114 - val_loss: 2.7965 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.5000 - loss: 2.7846 Epoch 18: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.5000 - loss: 2.7846 - val_accuracy: 0.4118 - val_loss: 2.7912 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5354 - loss: 2.7922 Epoch 19: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.5352 - loss: 2.7921 - val_accuracy: 0.5128 - val_loss: 2.7891 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:10 6s/step - accuracy: 0.5312 - loss: 2.7828 Epoch 20: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.5312 - loss: 2.7828 - val_accuracy: 0.5294 - val_loss: 2.7834 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5534 - loss: 2.7845 Epoch 21: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.5528 - loss: 2.7845 - val_accuracy: 0.5497 - val_loss: 2.7817 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.5625 - loss: 2.7924 Epoch 22: val_accuracy improved from 0.55882 to 0.58824, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 179ms/step - accuracy: 0.5625 - loss: 2.7924 - val_accuracy: 0.5882 - val_loss: 2.7930 Epoch 23/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5506 - loss: 2.7798 Epoch 23: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 346s 9s/step - accuracy: 0.5504 - loss: 2.7798 - val_accuracy: 0.5355 - val_loss: 2.7753 Epoch 24/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:12 6s/step - accuracy: 0.5938 - loss: 2.7751 Epoch 24: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.5938 - loss: 2.7751 - val_accuracy: 0.5882 - val_loss: 2.7581 Epoch 25/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5620 - loss: 2.7731 Epoch 25: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.5617 - loss: 2.7731 - val_accuracy: 0.5369 - val_loss: 2.7733 Epoch 26/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:12 6s/step - accuracy: 0.6094 - loss: 2.7689 Epoch 26: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 92ms/step - accuracy: 0.6094 - loss: 2.7689 - val_accuracy: 0.5588 - val_loss: 2.7548 Epoch 27/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5503 - loss: 2.7677 Epoch 27: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.5507 - loss: 2.7676 - val_accuracy: 0.5753 - val_loss: 2.7673 Epoch 28/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.6250 - loss: 2.7754 Epoch 28: val_accuracy improved from 0.58824 to 0.64706, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 171ms/step - accuracy: 0.6250 - loss: 2.7754 - val_accuracy: 0.6471 - val_loss: 2.7594 Epoch 29/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5500 - loss: 2.7661 Epoch 29: val_accuracy did not improve from 0.64706 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.5501 - loss: 2.7659 - val_accuracy: 0.5540 - val_loss: 2.7627 Epoch 30/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.5938 - loss: 2.7456 Epoch 30: val_accuracy did not improve from 0.64706 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.5938 - loss: 2.7456 - val_accuracy: 0.5882 - val_loss: 2.7517 Epoch 31/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5651 - loss: 2.7542 Epoch 31: val_accuracy did not improve from 0.64706 40/40 ━━━━━━━━━━━━━━━━━━━━ 342s 8s/step - accuracy: 0.5653 - loss: 2.7543 - val_accuracy: 0.5469 - val_loss: 2.7610 Epoch 32/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.5625 - loss: 2.7462 Epoch 32: val_accuracy did not improve from 0.64706 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.5625 - loss: 2.7462 - val_accuracy: 0.6471 - val_loss: 2.7503 Epoch 33/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5727 - loss: 2.7534 Epoch 33: val_accuracy did not improve from 0.64706 40/40 ━━━━━━━━━━━━━━━━━━━━ 341s 8s/step - accuracy: 0.5730 - loss: 2.7534 - val_accuracy: 0.5895 - val_loss: 2.7562 Epoch 34/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.6875 - loss: 2.7758 Epoch 34: val_accuracy did not improve from 0.64706 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.6875 - loss: 2.7758 - val_accuracy: 0.5294 - val_loss: 2.7355 Epoch 35/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.6055 - loss: 2.7458 Epoch 35: val_accuracy did not improve from 0.64706 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.6053 - loss: 2.7459 - val_accuracy: 0.5881 - val_loss: 2.7495 Epoch 36/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:10 6s/step - accuracy: 0.5781 - loss: 2.7696 Epoch 36: val_accuracy improved from 0.64706 to 0.67647, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 139ms/step - accuracy: 0.5781 - loss: 2.7696 - val_accuracy: 0.6765 - val_loss: 2.7163 Epoch 37/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5875 - loss: 2.7434 Epoch 37: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 344s 9s/step - accuracy: 0.5875 - loss: 2.7434 - val_accuracy: 0.5710 - val_loss: 2.7425 Epoch 38/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.6406 - loss: 2.7331 Epoch 38: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.6406 - loss: 2.7331 - val_accuracy: 0.6471 - val_loss: 2.7340 Epoch 39/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5843 - loss: 2.7414 Epoch 39: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.5843 - loss: 2.7413 - val_accuracy: 0.5795 - val_loss: 2.7380 Epoch 40/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:06 6s/step - accuracy: 0.6562 - loss: 2.7372 Epoch 40: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 105ms/step - accuracy: 0.6562 - loss: 2.7372 - val_accuracy: 0.4706 - val_loss: 2.7795 Epoch 41/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.6131 - loss: 2.7393 Epoch 41: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.6127 - loss: 2.7393 - val_accuracy: 0.5824 - val_loss: 2.7405 Epoch 42/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.5000 - loss: 2.7298 Epoch 42: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 93ms/step - accuracy: 0.5000 - loss: 2.7298 - val_accuracy: 0.3824 - val_loss: 2.7715 Epoch 43/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5961 - loss: 2.7364 Epoch 43: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5962 - loss: 2.7363 - val_accuracy: 0.5966 - val_loss: 2.7325 Epoch 44/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.6250 - loss: 2.7003 Epoch 44: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.6250 - loss: 2.7003 - val_accuracy: 0.4706 - val_loss: 2.7594 Epoch 45/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.6005 - loss: 2.7294 Epoch 45: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.6004 - loss: 2.7294 - val_accuracy: 0.6108 - val_loss: 2.7358 Epoch 46/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:08 6s/step - accuracy: 0.6250 - loss: 2.7093 Epoch 46: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.6250 - loss: 2.7093 - val_accuracy: 0.5588 - val_loss: 2.7139 Epoch 46: early stopping Restoring model weights from the end of the best epoch: 36. Tiempo transcurrido para el entrenamiento: 8119.172592163086 segundos Uso de CPU durante el entrenamiento: 35.099999999999994% Aumento en uso de memoria: 0.7940025329589844 GB Resultados para lr=0.0005, l2=0.1, batch_size=64: Tiempo: 8119.172592163086 segundos, CPU: 35.099999999999994%, Memoria: 0.7940025329589844 GB Precisión de validación: 0.6764705777168274 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_40"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_20 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.2410 - loss: 2.9945 Epoch 1: val_accuracy improved from -inf to 0.46332, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 360s 2s/step - accuracy: 0.2416 - loss: 2.9939 - val_accuracy: 0.4633 - val_loss: 2.8190 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:56 2s/step - accuracy: 0.5625 - loss: 2.8076 Epoch 2: val_accuracy improved from 0.46332 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 8s 41ms/step - accuracy: 0.5625 - loss: 2.8076 - val_accuracy: 1.0000 - val_loss: 2.8018 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5013 - loss: 2.8078 Epoch 3: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 359s 2s/step - accuracy: 0.5015 - loss: 2.8077 - val_accuracy: 0.5312 - val_loss: 2.7812 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:18 2s/step - accuracy: 0.2500 - loss: 2.7819 Epoch 4: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.2500 - loss: 2.7819 - val_accuracy: 0.0000e+00 - val_loss: 2.7367 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5631 - loss: 2.7713 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.5631 - loss: 2.7712 - val_accuracy: 0.5652 - val_loss: 2.7544 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:20 2s/step - accuracy: 0.5000 - loss: 2.6942 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5000 - loss: 2.6942 - val_accuracy: 0.5000 - val_loss: 2.7598 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5797 - loss: 2.7449 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 352s 2s/step - accuracy: 0.5797 - loss: 2.7449 - val_accuracy: 0.5639 - val_loss: 2.7373 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.4375 - loss: 2.7295 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.4375 - loss: 2.7295 - val_accuracy: 1.0000 - val_loss: 2.6059 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5690 - loss: 2.7258 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 353s 2s/step - accuracy: 0.5690 - loss: 2.7258 - val_accuracy: 0.5747 - val_loss: 2.7191 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.6875 - loss: 2.7775 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6875 - loss: 2.7775 - val_accuracy: 0.5000 - val_loss: 2.5962 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5812 - loss: 2.7101 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.5811 - loss: 2.7101 - val_accuracy: 0.5707 - val_loss: 2.7060 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.6250 - loss: 2.6854 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6250 - loss: 2.6854 - val_accuracy: 1.0000 - val_loss: 2.7129 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2144.4242074489594 segundos Uso de CPU durante el entrenamiento: 44.9% Aumento en uso de memoria: -1.9095535278320312 GB Resultados para lr=0.001, l2=0.01, batch_size=16: Tiempo: 2144.4242074489594 segundos, CPU: 44.9%, Memoria: -1.9095535278320312 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_42"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_21 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.1430 - loss: 3.0686 Epoch 1: val_accuracy improved from -inf to 0.45924, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 350s 4s/step - accuracy: 0.1445 - loss: 3.0674 - val_accuracy: 0.4592 - val_loss: 2.8516 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:14 3s/step - accuracy: 0.5312 - loss: 2.8481 Epoch 2: val_accuracy improved from 0.45924 to 0.50000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 5s 29ms/step - accuracy: 0.5312 - loss: 2.8481 - val_accuracy: 0.5000 - val_loss: 2.8618 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5006 - loss: 2.8383 Epoch 3: val_accuracy improved from 0.50000 to 0.50408, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 351s 4s/step - accuracy: 0.5006 - loss: 2.8382 - val_accuracy: 0.5041 - val_loss: 2.8135 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:08 3s/step - accuracy: 0.6250 - loss: 2.7998 Epoch 4: val_accuracy did not improve from 0.50408 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.6250 - loss: 2.7998 - val_accuracy: 0.5000 - val_loss: 2.7971 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5311 - loss: 2.8064 Epoch 5: val_accuracy did not improve from 0.50408 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.5311 - loss: 2.8063 - val_accuracy: 0.5027 - val_loss: 2.7916 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:11 3s/step - accuracy: 0.5312 - loss: 2.7970 Epoch 6: val_accuracy did not improve from 0.50408 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5312 - loss: 2.7970 - val_accuracy: 0.5000 - val_loss: 2.7789 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5280 - loss: 2.7826 Epoch 7: val_accuracy improved from 0.50408 to 0.53261, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 348s 4s/step - accuracy: 0.5279 - loss: 2.7825 - val_accuracy: 0.5326 - val_loss: 2.7700 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:59 3s/step - accuracy: 0.6875 - loss: 2.7776 Epoch 8: val_accuracy did not improve from 0.53261 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.6875 - loss: 2.7776 - val_accuracy: 0.5000 - val_loss: 2.8851 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5491 - loss: 2.7641 Epoch 9: val_accuracy improved from 0.53261 to 0.55299, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.5492 - loss: 2.7640 - val_accuracy: 0.5530 - val_loss: 2.7579 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:02 3s/step - accuracy: 0.6562 - loss: 2.7607 Epoch 10: val_accuracy did not improve from 0.55299 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.6562 - loss: 2.7607 - val_accuracy: 0.0000e+00 - val_loss: 2.7720 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5883 - loss: 2.7457 Epoch 11: val_accuracy did not improve from 0.55299 81/81 ━━━━━━━━━━━━━━━━━━━━ 344s 4s/step - accuracy: 0.5880 - loss: 2.7457 - val_accuracy: 0.5516 - val_loss: 2.7449 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:13 3s/step - accuracy: 0.6562 - loss: 2.7552 Epoch 12: val_accuracy did not improve from 0.55299 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.6562 - loss: 2.7552 - val_accuracy: 0.5000 - val_loss: 2.7393 Epoch 13/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5890 - loss: 2.7356 Epoch 13: val_accuracy improved from 0.55299 to 0.55435, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.5889 - loss: 2.7356 - val_accuracy: 0.5543 - val_loss: 2.7323 Epoch 14/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:03 3s/step - accuracy: 0.5625 - loss: 2.7383 Epoch 14: val_accuracy did not improve from 0.55435 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5625 - loss: 2.7383 - val_accuracy: 0.5000 - val_loss: 2.8997 Epoch 15/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5677 - loss: 2.7240 Epoch 15: val_accuracy did not improve from 0.55435 81/81 ━━━━━━━━━━━━━━━━━━━━ 346s 4s/step - accuracy: 0.5676 - loss: 2.7240 - val_accuracy: 0.5462 - val_loss: 2.7267 Epoch 16/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.6875 - loss: 2.7204 Epoch 16: val_accuracy improved from 0.55435 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 5s 29ms/step - accuracy: 0.6875 - loss: 2.7204 - val_accuracy: 1.0000 - val_loss: 2.7205 Epoch 17/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5803 - loss: 2.7142 Epoch 17: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 353s 4s/step - accuracy: 0.5801 - loss: 2.7142 - val_accuracy: 0.5774 - val_loss: 2.7145 Epoch 18/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:17 3s/step - accuracy: 0.6250 - loss: 2.7172 Epoch 18: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.6250 - loss: 2.7172 - val_accuracy: 0.5000 - val_loss: 2.7914 Epoch 19/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5718 - loss: 2.7077 Epoch 19: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.5717 - loss: 2.7076 - val_accuracy: 0.5489 - val_loss: 2.7039 Epoch 20/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.5938 - loss: 2.6725 Epoch 20: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5938 - loss: 2.6725 - val_accuracy: 1.0000 - val_loss: 2.6707 Epoch 21/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5690 - loss: 2.7081 Epoch 21: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.5691 - loss: 2.7080 - val_accuracy: 0.5611 - val_loss: 2.7035 Epoch 22/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.4062 - loss: 2.6477 Epoch 22: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.4062 - loss: 2.6477 - val_accuracy: 1.0000 - val_loss: 2.6926 Epoch 23/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5670 - loss: 2.6927 Epoch 23: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.5669 - loss: 2.6927 - val_accuracy: 0.5367 - val_loss: 2.6929 Epoch 24/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:24 3s/step - accuracy: 0.6562 - loss: 2.6953 Epoch 24: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step - accuracy: 0.6562 - loss: 2.6953 - val_accuracy: 1.0000 - val_loss: 2.8263 Epoch 25/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5528 - loss: 2.6859 Epoch 25: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.5529 - loss: 2.6859 - val_accuracy: 0.5530 - val_loss: 2.6935 Epoch 26/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:02 3s/step - accuracy: 0.5625 - loss: 2.7589 Epoch 26: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5625 - loss: 2.7589 - val_accuracy: 0.5000 - val_loss: 2.7743 Epoch 26: early stopping Restoring model weights from the end of the best epoch: 16. Tiempo transcurrido para el entrenamiento: 4539.860846281052 segundos Uso de CPU durante el entrenamiento: 52.2% Aumento en uso de memoria: 0.4283866882324219 GB Resultados para lr=0.001, l2=0.01, batch_size=32: Tiempo: 4539.860846281052 segundos, CPU: 52.2%, Memoria: 0.4283866882324219 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_44"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_22 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.0829 - loss: 3.1401 Epoch 1: val_accuracy improved from -inf to 0.25142, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 345s 8s/step - accuracy: 0.0842 - loss: 3.1380 - val_accuracy: 0.2514 - val_loss: 2.9238 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:53 6s/step - accuracy: 0.1406 - loss: 2.9289 Epoch 2: val_accuracy improved from 0.25142 to 0.32353, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 182ms/step - accuracy: 0.1406 - loss: 2.9289 - val_accuracy: 0.3235 - val_loss: 2.9157 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.3350 - loss: 2.8944 Epoch 3: val_accuracy improved from 0.32353 to 0.38210, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 348s 9s/step - accuracy: 0.3358 - loss: 2.8940 - val_accuracy: 0.3821 - val_loss: 2.8500 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:57 6s/step - accuracy: 0.3594 - loss: 2.8457 Epoch 4: val_accuracy improved from 0.38210 to 0.41176, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 147ms/step - accuracy: 0.3594 - loss: 2.8457 - val_accuracy: 0.4118 - val_loss: 2.8446 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4298 - loss: 2.8407 Epoch 5: val_accuracy improved from 0.41176 to 0.45597, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.4302 - loss: 2.8405 - val_accuracy: 0.4560 - val_loss: 2.8257 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.3906 - loss: 2.8352 Epoch 6: val_accuracy improved from 0.45597 to 0.50000, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 149ms/step - accuracy: 0.3906 - loss: 2.8352 - val_accuracy: 0.5000 - val_loss: 2.8217 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4776 - loss: 2.8180 Epoch 7: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.4776 - loss: 2.8179 - val_accuracy: 0.4645 - val_loss: 2.8090 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:13 6s/step - accuracy: 0.3906 - loss: 2.8047 Epoch 8: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.3906 - loss: 2.8047 - val_accuracy: 0.4412 - val_loss: 2.8013 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5102 - loss: 2.8008 Epoch 9: val_accuracy did not improve from 0.50000 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.5098 - loss: 2.8007 - val_accuracy: 0.4560 - val_loss: 2.7948 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:06 6s/step - accuracy: 0.5469 - loss: 2.7997 Epoch 10: val_accuracy improved from 0.50000 to 0.55882, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 153ms/step - accuracy: 0.5469 - loss: 2.7997 - val_accuracy: 0.5588 - val_loss: 2.7888 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4947 - loss: 2.7890 Epoch 11: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 342s 8s/step - accuracy: 0.4947 - loss: 2.7889 - val_accuracy: 0.4773 - val_loss: 2.7821 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.5156 - loss: 2.7857 Epoch 12: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 100ms/step - accuracy: 0.5156 - loss: 2.7857 - val_accuracy: 0.4412 - val_loss: 2.7594 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5052 - loss: 2.7768 Epoch 13: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5055 - loss: 2.7767 - val_accuracy: 0.4957 - val_loss: 2.7713 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.4688 - loss: 2.7815 Epoch 14: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 95ms/step - accuracy: 0.4688 - loss: 2.7815 - val_accuracy: 0.5294 - val_loss: 2.7916 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5211 - loss: 2.7668 Epoch 15: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 335s 8s/step - accuracy: 0.5211 - loss: 2.7667 - val_accuracy: 0.5142 - val_loss: 2.7636 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.5000 - loss: 2.7537 Epoch 16: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 86ms/step - accuracy: 0.5000 - loss: 2.7537 - val_accuracy: 0.5294 - val_loss: 2.7599 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5537 - loss: 2.7565 Epoch 17: val_accuracy did not improve from 0.55882 40/40 ━━━━━━━━━━━━━━━━━━━━ 344s 8s/step - accuracy: 0.5533 - loss: 2.7565 - val_accuracy: 0.5298 - val_loss: 2.7556 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.4531 - loss: 2.7510 Epoch 18: val_accuracy improved from 0.55882 to 0.61765, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 136ms/step - accuracy: 0.4531 - loss: 2.7510 - val_accuracy: 0.6176 - val_loss: 2.7415 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5439 - loss: 2.7472 Epoch 19: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5444 - loss: 2.7472 - val_accuracy: 0.5724 - val_loss: 2.7459 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:13 6s/step - accuracy: 0.5938 - loss: 2.7413 Epoch 20: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.5938 - loss: 2.7413 - val_accuracy: 0.5000 - val_loss: 2.7415 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5968 - loss: 2.7417 Epoch 21: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 335s 8s/step - accuracy: 0.5966 - loss: 2.7416 - val_accuracy: 0.5710 - val_loss: 2.7404 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:54 6s/step - accuracy: 0.5625 - loss: 2.7420 Epoch 22: val_accuracy improved from 0.61765 to 0.67647, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 158ms/step - accuracy: 0.5625 - loss: 2.7420 - val_accuracy: 0.6765 - val_loss: 2.7059 Epoch 23/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5954 - loss: 2.7328 Epoch 23: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5951 - loss: 2.7328 - val_accuracy: 0.5625 - val_loss: 2.7332 Epoch 24/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.5938 - loss: 2.7315 Epoch 24: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.5938 - loss: 2.7315 - val_accuracy: 0.5588 - val_loss: 2.7574 Epoch 25/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5745 - loss: 2.7290 Epoch 25: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.5751 - loss: 2.7289 - val_accuracy: 0.5724 - val_loss: 2.7269 Epoch 26/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:14 7s/step - accuracy: 0.5156 - loss: 2.7731 Epoch 26: val_accuracy improved from 0.67647 to 0.73529, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 165ms/step - accuracy: 0.5156 - loss: 2.7731 - val_accuracy: 0.7353 - val_loss: 2.7444 Epoch 27/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5991 - loss: 2.7192 Epoch 27: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5988 - loss: 2.7192 - val_accuracy: 0.5611 - val_loss: 2.7259 Epoch 28/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:11 6s/step - accuracy: 0.6406 - loss: 2.6833 Epoch 28: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.6406 - loss: 2.6833 - val_accuracy: 0.6176 - val_loss: 2.7019 Epoch 29/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5795 - loss: 2.7138 Epoch 29: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 345s 9s/step - accuracy: 0.5800 - loss: 2.7138 - val_accuracy: 0.5739 - val_loss: 2.7150 Epoch 30/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.5781 - loss: 2.7321 Epoch 30: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 91ms/step - accuracy: 0.5781 - loss: 2.7321 - val_accuracy: 0.5588 - val_loss: 2.7272 Epoch 31/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5716 - loss: 2.7138 Epoch 31: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.5721 - loss: 2.7136 - val_accuracy: 0.5739 - val_loss: 2.7135 Epoch 32/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:08 6s/step - accuracy: 0.7031 - loss: 2.7111 Epoch 32: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 93ms/step - accuracy: 0.7031 - loss: 2.7111 - val_accuracy: 0.5588 - val_loss: 2.7103 Epoch 33/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5900 - loss: 2.7034 Epoch 33: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 344s 9s/step - accuracy: 0.5898 - loss: 2.7034 - val_accuracy: 0.5682 - val_loss: 2.7070 Epoch 34/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:12 6s/step - accuracy: 0.5469 - loss: 2.6961 Epoch 34: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.5469 - loss: 2.6961 - val_accuracy: 0.4412 - val_loss: 2.7177 Epoch 35/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.6035 - loss: 2.6925 Epoch 35: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.6033 - loss: 2.6927 - val_accuracy: 0.5767 - val_loss: 2.7066 Epoch 36/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.6562 - loss: 2.7126 Epoch 36: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 91ms/step - accuracy: 0.6562 - loss: 2.7126 - val_accuracy: 0.5000 - val_loss: 2.7033 Epoch 36: early stopping Restoring model weights from the end of the best epoch: 26. Tiempo transcurrido para el entrenamiento: 6308.311935424805 segundos Uso de CPU durante el entrenamiento: 48.3% Aumento en uso de memoria: 0.6409111022949219 GB Resultados para lr=0.001, l2=0.01, batch_size=64: Tiempo: 6308.311935424805 segundos, CPU: 48.3%, Memoria: 0.6409111022949219 GB Precisión de validación: 0.7352941036224365 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_46"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_23 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.2191 - loss: 2.9961 Epoch 1: val_accuracy improved from -inf to 0.46060, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 361s 2s/step - accuracy: 0.2198 - loss: 2.9955 - val_accuracy: 0.4606 - val_loss: 2.8177 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.3750 - loss: 2.8064 Epoch 2: val_accuracy improved from 0.46060 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 4s 16ms/step - accuracy: 0.3750 - loss: 2.8064 - val_accuracy: 1.0000 - val_loss: 2.7709 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.4738 - loss: 2.8050 Epoch 3: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.4739 - loss: 2.8049 - val_accuracy: 0.5245 - val_loss: 2.7783 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:15 2s/step - accuracy: 0.5625 - loss: 2.7841 Epoch 4: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5625 - loss: 2.7841 - val_accuracy: 0.5000 - val_loss: 2.7385 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5787 - loss: 2.7689 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.5786 - loss: 2.7689 - val_accuracy: 0.5883 - val_loss: 2.7545 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:18 2s/step - accuracy: 0.4375 - loss: 2.7306 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.4375 - loss: 2.7306 - val_accuracy: 1.0000 - val_loss: 2.6417 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5796 - loss: 2.7461 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 358s 2s/step - accuracy: 0.5795 - loss: 2.7461 - val_accuracy: 0.5652 - val_loss: 2.7348 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 0.5625 - loss: 2.7722 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5625 - loss: 2.7722 - val_accuracy: 0.0000e+00 - val_loss: 2.8733 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5674 - loss: 2.7292 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.5674 - loss: 2.7292 - val_accuracy: 0.5693 - val_loss: 2.7180 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:18 2s/step - accuracy: 0.6250 - loss: 2.7167 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6250 - loss: 2.7167 - val_accuracy: 0.5000 - val_loss: 2.9166 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5753 - loss: 2.7112 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.5753 - loss: 2.7112 - val_accuracy: 0.5788 - val_loss: 2.7057 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:10 2s/step - accuracy: 0.6875 - loss: 2.6683 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6875 - loss: 2.6683 - val_accuracy: 0.0000e+00 - val_loss: 2.6676 Epoch 12: early stopping Restoring model weights from the end of the best epoch: 2. Tiempo transcurrido para el entrenamiento: 2133.9292175769806 segundos Uso de CPU durante el entrenamiento: 23.400000000000006% Aumento en uso de memoria: -1.5922889709472656 GB Resultados para lr=0.001, l2=0.05, batch_size=16: Tiempo: 2133.9292175769806 segundos, CPU: 23.400000000000006%, Memoria: -1.5922889709472656 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_48"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_24 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.1535 - loss: 3.0690 Epoch 1: val_accuracy improved from -inf to 0.46332, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 346s 4s/step - accuracy: 0.1549 - loss: 3.0677 - val_accuracy: 0.4633 - val_loss: 2.8484 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.5312 - loss: 2.8454 Epoch 2: val_accuracy did not improve from 0.46332 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5312 - loss: 2.8454 - val_accuracy: 0.0000e+00 - val_loss: 2.8535 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.4983 - loss: 2.8365 Epoch 3: val_accuracy improved from 0.46332 to 0.52717, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.4985 - loss: 2.8364 - val_accuracy: 0.5272 - val_loss: 2.8099 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.5312 - loss: 2.8100 Epoch 4: val_accuracy improved from 0.52717 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 5s 25ms/step - accuracy: 0.5312 - loss: 2.8100 - val_accuracy: 1.0000 - val_loss: 2.7326 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5669 - loss: 2.8028 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 349s 4s/step - accuracy: 0.5669 - loss: 2.8027 - val_accuracy: 0.5462 - val_loss: 2.7868 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.6562 - loss: 2.7736 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.6562 - loss: 2.7736 - val_accuracy: 1.0000 - val_loss: 2.8604 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5434 - loss: 2.7790 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 343s 4s/step - accuracy: 0.5434 - loss: 2.7789 - val_accuracy: 0.5652 - val_loss: 2.7702 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.4375 - loss: 2.7979 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.4375 - loss: 2.7979 - val_accuracy: 0.5000 - val_loss: 2.8309 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5694 - loss: 2.7590 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 349s 4s/step - accuracy: 0.5693 - loss: 2.7590 - val_accuracy: 0.5476 - val_loss: 2.7542 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:27 3s/step - accuracy: 0.5625 - loss: 2.7778 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 4ms/step - accuracy: 0.5625 - loss: 2.7778 - val_accuracy: 0.5000 - val_loss: 2.8150 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5744 - loss: 2.7443 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 341s 4s/step - accuracy: 0.5744 - loss: 2.7443 - val_accuracy: 0.5666 - val_loss: 2.7399 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.6250 - loss: 2.7217 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.6250 - loss: 2.7217 - val_accuracy: 0.5000 - val_loss: 2.6831 Epoch 13/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5828 - loss: 2.7313 Epoch 13: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 341s 4s/step - accuracy: 0.5826 - loss: 2.7313 - val_accuracy: 0.5476 - val_loss: 2.7285 Epoch 14/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:06 3s/step - accuracy: 0.5938 - loss: 2.7162 Epoch 14: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5938 - loss: 2.7162 - val_accuracy: 0.0000e+00 - val_loss: 2.7508 Epoch 14: early stopping Restoring model weights from the end of the best epoch: 4. Tiempo transcurrido para el entrenamiento: 2439.8384866714478 segundos Uso de CPU durante el entrenamiento: 58.2% Aumento en uso de memoria: 0.38370513916015625 GB Resultados para lr=0.001, l2=0.05, batch_size=32: Tiempo: 2439.8384866714478 segundos, CPU: 58.2%, Memoria: 0.38370513916015625 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_50"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_25 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.1017 - loss: 3.1400 Epoch 1: val_accuracy improved from -inf to 0.25284, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 350s 8s/step - accuracy: 0.1030 - loss: 3.1379 - val_accuracy: 0.2528 - val_loss: 2.9239 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:06 6s/step - accuracy: 0.2188 - loss: 2.9276 Epoch 2: val_accuracy improved from 0.25284 to 0.26471, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 163ms/step - accuracy: 0.2188 - loss: 2.9276 - val_accuracy: 0.2647 - val_loss: 2.9167 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.3108 - loss: 2.8949 Epoch 3: val_accuracy improved from 0.26471 to 0.42188, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.3116 - loss: 2.8944 - val_accuracy: 0.4219 - val_loss: 2.8506 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:52 6s/step - accuracy: 0.2969 - loss: 2.8544 Epoch 4: val_accuracy did not improve from 0.42188 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 89ms/step - accuracy: 0.2969 - loss: 2.8544 - val_accuracy: 0.2941 - val_loss: 2.8535 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.4368 - loss: 2.8404 Epoch 5: val_accuracy improved from 0.42188 to 0.48438, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.4369 - loss: 2.8403 - val_accuracy: 0.4844 - val_loss: 2.8264 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.4531 - loss: 2.8301 Epoch 6: val_accuracy did not improve from 0.48438 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 84ms/step - accuracy: 0.4531 - loss: 2.8301 - val_accuracy: 0.4706 - val_loss: 2.8229 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4905 - loss: 2.8186 Epoch 7: val_accuracy improved from 0.48438 to 0.49148, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 345s 9s/step - accuracy: 0.4907 - loss: 2.8185 - val_accuracy: 0.4915 - val_loss: 2.8092 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:55 6s/step - accuracy: 0.4844 - loss: 2.8042 Epoch 8: val_accuracy did not improve from 0.49148 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 86ms/step - accuracy: 0.4844 - loss: 2.8042 - val_accuracy: 0.3529 - val_loss: 2.8226 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4842 - loss: 2.8043 Epoch 9: val_accuracy improved from 0.49148 to 0.53409, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 341s 8s/step - accuracy: 0.4849 - loss: 2.8042 - val_accuracy: 0.5341 - val_loss: 2.7952 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:08 6s/step - accuracy: 0.6094 - loss: 2.8036 Epoch 10: val_accuracy did not improve from 0.53409 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 93ms/step - accuracy: 0.6094 - loss: 2.8036 - val_accuracy: 0.3235 - val_loss: 2.7911 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5406 - loss: 2.7876 Epoch 11: val_accuracy improved from 0.53409 to 0.55540, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 343s 8s/step - accuracy: 0.5409 - loss: 2.7875 - val_accuracy: 0.5554 - val_loss: 2.7837 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:58 6s/step - accuracy: 0.5000 - loss: 2.7782 Epoch 12: val_accuracy improved from 0.55540 to 0.58824, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 143ms/step - accuracy: 0.5000 - loss: 2.7782 - val_accuracy: 0.5882 - val_loss: 2.7957 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5638 - loss: 2.7793 Epoch 13: val_accuracy did not improve from 0.58824 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.5639 - loss: 2.7792 - val_accuracy: 0.5483 - val_loss: 2.7736 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:03 6s/step - accuracy: 0.6250 - loss: 2.7555 Epoch 14: val_accuracy improved from 0.58824 to 0.64706, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 149ms/step - accuracy: 0.6250 - loss: 2.7555 - val_accuracy: 0.6471 - val_loss: 2.7598 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5790 - loss: 2.7669 Epoch 15: val_accuracy did not improve from 0.64706 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.5789 - loss: 2.7669 - val_accuracy: 0.5568 - val_loss: 2.7644 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:07 6s/step - accuracy: 0.5625 - loss: 2.7486 Epoch 16: val_accuracy improved from 0.64706 to 0.67647, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 13s 161ms/step - accuracy: 0.5625 - loss: 2.7486 - val_accuracy: 0.6765 - val_loss: 2.7629 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5464 - loss: 2.7587 Epoch 17: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 339s 8s/step - accuracy: 0.5469 - loss: 2.7587 - val_accuracy: 0.5554 - val_loss: 2.7559 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:01 6s/step - accuracy: 0.5938 - loss: 2.7471 Epoch 18: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.5938 - loss: 2.7471 - val_accuracy: 0.5588 - val_loss: 2.7149 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5663 - loss: 2.7485 Epoch 19: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.5661 - loss: 2.7485 - val_accuracy: 0.5398 - val_loss: 2.7512 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:17 7s/step - accuracy: 0.4688 - loss: 2.7092 Epoch 20: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.4688 - loss: 2.7092 - val_accuracy: 0.6471 - val_loss: 2.7499 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5758 - loss: 2.7409 Epoch 21: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 343s 8s/step - accuracy: 0.5761 - loss: 2.7409 - val_accuracy: 0.5966 - val_loss: 2.7392 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:12 6s/step - accuracy: 0.5781 - loss: 2.7550 Epoch 22: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 90ms/step - accuracy: 0.5781 - loss: 2.7550 - val_accuracy: 0.4706 - val_loss: 2.7560 Epoch 23/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5915 - loss: 2.7395 Epoch 23: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5918 - loss: 2.7394 - val_accuracy: 0.5724 - val_loss: 2.7343 Epoch 24/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.5312 - loss: 2.7318 Epoch 24: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 91ms/step - accuracy: 0.5312 - loss: 2.7318 - val_accuracy: 0.6176 - val_loss: 2.7428 Epoch 25/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.6086 - loss: 2.7255 Epoch 25: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.6084 - loss: 2.7256 - val_accuracy: 0.6009 - val_loss: 2.7290 Epoch 26/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:07 6s/step - accuracy: 0.5938 - loss: 2.7093 Epoch 26: val_accuracy did not improve from 0.67647 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 85ms/step - accuracy: 0.5938 - loss: 2.7093 - val_accuracy: 0.5294 - val_loss: 2.7332 Epoch 26: early stopping Restoring model weights from the end of the best epoch: 16. Tiempo transcurrido para el entrenamiento: 4564.721148252487 segundos Uso de CPU durante el entrenamiento: 43.800000000000004% Aumento en uso de memoria: 0.6228561401367188 GB Resultados para lr=0.001, l2=0.05, batch_size=64: Tiempo: 4564.721148252487 segundos, CPU: 43.800000000000004%, Memoria: 0.6228561401367188 GB Precisión de validación: 0.6764705777168274 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_52"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_26 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.2899 - loss: 2.9905 Epoch 1: val_accuracy improved from -inf to 0.48098, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 355s 2s/step - accuracy: 0.2904 - loss: 2.9899 - val_accuracy: 0.4810 - val_loss: 2.8158 Epoch 2/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:23 2s/step - accuracy: 0.5000 - loss: 2.8190 Epoch 2: val_accuracy improved from 0.48098 to 0.50000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 4s 16ms/step - accuracy: 0.5000 - loss: 2.8190 - val_accuracy: 0.5000 - val_loss: 2.7869 Epoch 3/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5144 - loss: 2.8026 Epoch 3: val_accuracy improved from 0.50000 to 0.57201, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 354s 2s/step - accuracy: 0.5146 - loss: 2.8026 - val_accuracy: 0.5720 - val_loss: 2.7791 Epoch 4/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 0.5000 - loss: 2.7909 Epoch 4: val_accuracy improved from 0.57201 to 1.00000, saving model to best_model.keras 163/163 ━━━━━━━━━━━━━━━━━━━━ 4s 15ms/step - accuracy: 0.5000 - loss: 2.7909 - val_accuracy: 1.0000 - val_loss: 2.8206 Epoch 5/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5720 - loss: 2.7674 Epoch 5: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 356s 2s/step - accuracy: 0.5720 - loss: 2.7674 - val_accuracy: 0.6073 - val_loss: 2.7542 Epoch 6/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:18 2s/step - accuracy: 0.5625 - loss: 2.7952 Epoch 6: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.5625 - loss: 2.7952 - val_accuracy: 1.0000 - val_loss: 2.7404 Epoch 7/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5985 - loss: 2.7395 Epoch 7: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 358s 2s/step - accuracy: 0.5984 - loss: 2.7395 - val_accuracy: 0.5707 - val_loss: 2.7336 Epoch 8/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 3:32 1s/step - accuracy: 0.6154 - loss: 2.6923 Epoch 8: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6154 - loss: 2.6923 - val_accuracy: 0.5000 - val_loss: 2.5569 Epoch 9/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5940 - loss: 2.7259 Epoch 9: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 351s 2s/step - accuracy: 0.5939 - loss: 2.7259 - val_accuracy: 0.5693 - val_loss: 2.7175 Epoch 10/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:18 2s/step - accuracy: 0.6250 - loss: 2.7904 Epoch 10: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6250 - loss: 2.7904 - val_accuracy: 1.0000 - val_loss: 2.5673 Epoch 11/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5523 - loss: 2.7149 Epoch 11: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 350s 2s/step - accuracy: 0.5523 - loss: 2.7148 - val_accuracy: 0.5639 - val_loss: 2.7057 Epoch 12/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:08 2s/step - accuracy: 0.6250 - loss: 2.6571 Epoch 12: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6250 - loss: 2.6571 - val_accuracy: 0.5000 - val_loss: 2.6314 Epoch 13/50 163/163 ━━━━━━━━━━━━━━━━━━━━ 0s 2s/step - accuracy: 0.5554 - loss: 2.6929 Epoch 13: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 349s 2s/step - accuracy: 0.5554 - loss: 2.6929 - val_accuracy: 0.5707 - val_loss: 2.6924 Epoch 14/50 1/163 ━━━━━━━━━━━━━━━━━━━━ 4:13 2s/step - accuracy: 0.6875 - loss: 2.5849 Epoch 14: val_accuracy did not improve from 1.00000 163/163 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - accuracy: 0.6875 - loss: 2.5849 - val_accuracy: 0.5000 - val_loss: 2.6987 Epoch 14: early stopping Restoring model weights from the end of the best epoch: 4. Tiempo transcurrido para el entrenamiento: 2492.0556807518005 segundos Uso de CPU durante el entrenamiento: 52.4% Aumento en uso de memoria: -0.22751235961914062 GB Resultados para lr=0.001, l2=0.1, batch_size=16: Tiempo: 2492.0556807518005 segundos, CPU: 52.4%, Memoria: -0.22751235961914062 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_54"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_27 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.1336 - loss: 3.0745 Epoch 1: val_accuracy improved from -inf to 0.45245, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 350s 4s/step - accuracy: 0.1350 - loss: 3.0732 - val_accuracy: 0.4524 - val_loss: 2.8566 Epoch 2/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 5:11 4s/step - accuracy: 0.5312 - loss: 2.8480 Epoch 2: val_accuracy improved from 0.45245 to 0.50000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 7s 42ms/step - accuracy: 0.5312 - loss: 2.8480 - val_accuracy: 0.5000 - val_loss: 2.8530 Epoch 3/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.4663 - loss: 2.8439 Epoch 3: val_accuracy improved from 0.50000 to 0.51902, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.4665 - loss: 2.8438 - val_accuracy: 0.5190 - val_loss: 2.8174 Epoch 4/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 3:59 3s/step - accuracy: 0.5625 - loss: 2.8044 Epoch 4: val_accuracy improved from 0.51902 to 1.00000, saving model to best_model.keras 81/81 ━━━━━━━━━━━━━━━━━━━━ 5s 27ms/step - accuracy: 0.5625 - loss: 2.8044 - val_accuracy: 1.0000 - val_loss: 2.8197 Epoch 5/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5044 - loss: 2.8104 Epoch 5: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 341s 4s/step - accuracy: 0.5048 - loss: 2.8103 - val_accuracy: 0.5448 - val_loss: 2.7917 Epoch 6/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:07 3s/step - accuracy: 0.7500 - loss: 2.7953 Epoch 6: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.7500 - loss: 2.7953 - val_accuracy: 1.0000 - val_loss: 2.7195 Epoch 7/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5741 - loss: 2.7849 Epoch 7: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 349s 4s/step - accuracy: 0.5740 - loss: 2.7848 - val_accuracy: 0.5815 - val_loss: 2.7713 Epoch 8/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:09 3s/step - accuracy: 0.5000 - loss: 2.7673 Epoch 8: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5000 - loss: 2.7673 - val_accuracy: 0.5000 - val_loss: 2.7556 Epoch 9/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5904 - loss: 2.7654 Epoch 9: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 345s 4s/step - accuracy: 0.5901 - loss: 2.7654 - val_accuracy: 0.5693 - val_loss: 2.7566 Epoch 10/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:04 3s/step - accuracy: 0.6562 - loss: 2.7819 Epoch 10: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.6562 - loss: 2.7819 - val_accuracy: 0.5000 - val_loss: 2.6378 Epoch 11/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5681 - loss: 2.7503 Epoch 11: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 342s 4s/step - accuracy: 0.5681 - loss: 2.7503 - val_accuracy: 0.5571 - val_loss: 2.7424 Epoch 12/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:37 3s/step - accuracy: 0.6562 - loss: 2.7293 Epoch 12: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step - accuracy: 0.6562 - loss: 2.7293 - val_accuracy: 0.5000 - val_loss: 2.8291 Epoch 13/50 81/81 ━━━━━━━━━━━━━━━━━━━━ 0s 3s/step - accuracy: 0.5689 - loss: 2.7411 Epoch 13: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 346s 4s/step - accuracy: 0.5688 - loss: 2.7410 - val_accuracy: 0.5598 - val_loss: 2.7334 Epoch 14/50 1/81 ━━━━━━━━━━━━━━━━━━━━ 4:01 3s/step - accuracy: 0.5625 - loss: 2.7240 Epoch 14: val_accuracy did not improve from 1.00000 81/81 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - accuracy: 0.5625 - loss: 2.7240 - val_accuracy: 0.5000 - val_loss: 2.5885 Epoch 14: early stopping Restoring model weights from the end of the best epoch: 4. Tiempo transcurrido para el entrenamiento: 2448.003697156906 segundos Uso de CPU durante el entrenamiento: 47.400000000000006% Aumento en uso de memoria: -0.4161796569824219 GB Resultados para lr=0.001, l2=0.1, batch_size=32: Tiempo: 2448.003697156906 segundos, CPU: 47.400000000000006%, Memoria: -0.4161796569824219 GB Precisión de validación: 1.0 Found 2621 images belonging to 18 classes. Found 738 images belonging to 18 classes.
Model: "functional_56"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ input_layer_28 (InputLayer) │ (None, 224, 224, 3) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv1 (Conv2D) │ (None, 224, 224, 64) │ 1,792 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_conv2 (Conv2D) │ (None, 224, 224, 64) │ 36,928 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block1_pool (MaxPooling2D) │ (None, 112, 112, 64) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv1 (Conv2D) │ (None, 112, 112, 128) │ 73,856 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_conv2 (Conv2D) │ (None, 112, 112, 128) │ 147,584 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block2_pool (MaxPooling2D) │ (None, 56, 56, 128) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv1 (Conv2D) │ (None, 56, 56, 256) │ 295,168 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv2 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_conv3 (Conv2D) │ (None, 56, 56, 256) │ 590,080 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block3_pool (MaxPooling2D) │ (None, 28, 28, 256) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv1 (Conv2D) │ (None, 28, 28, 512) │ 1,180,160 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv2 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_conv3 (Conv2D) │ (None, 28, 28, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block4_pool (MaxPooling2D) │ (None, 14, 14, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv1 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv2 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_conv3 (Conv2D) │ (None, 14, 14, 512) │ 2,359,808 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ block5_pool (MaxPooling2D) │ (None, 7, 7, 512) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ flatten (Flatten) │ (None, 25088) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc1 (Dense) │ (None, 4096) │ 102,764,544 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ fc2 (Dense) │ (None, 4096) │ 16,781,312 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ predictions (Dense) │ (None, 1000) │ 4,097,000 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_flatten (Flatten) │ (None, 1000) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ custom_dense (Dense) │ (None, 18) │ 18,018 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 138,375,562 (527.86 MB)
Trainable params: 18,018 (70.38 KB)
Non-trainable params: 138,357,544 (527.79 MB)
Epoch 1/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.1474 - loss: 3.1268 Epoch 1: val_accuracy improved from -inf to 0.42756, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 355s 8s/step - accuracy: 0.1495 - loss: 3.1247 - val_accuracy: 0.4276 - val_loss: 2.9106 Epoch 2/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:11 6s/step - accuracy: 0.4219 - loss: 2.9086 Epoch 2: val_accuracy did not improve from 0.42756 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 95ms/step - accuracy: 0.4219 - loss: 2.9086 - val_accuracy: 0.3529 - val_loss: 2.9115 Epoch 3/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.4548 - loss: 2.8814 Epoch 3: val_accuracy improved from 0.42756 to 0.51705, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 344s 9s/step - accuracy: 0.4556 - loss: 2.8810 - val_accuracy: 0.5170 - val_loss: 2.8368 Epoch 4/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:14 7s/step - accuracy: 0.5312 - loss: 2.8409 Epoch 4: val_accuracy did not improve from 0.51705 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 95ms/step - accuracy: 0.5312 - loss: 2.8409 - val_accuracy: 0.4118 - val_loss: 2.8473 Epoch 5/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5584 - loss: 2.8278 Epoch 5: val_accuracy improved from 0.51705 to 0.56534, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 347s 9s/step - accuracy: 0.5584 - loss: 2.8277 - val_accuracy: 0.5653 - val_loss: 2.8125 Epoch 6/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.6094 - loss: 2.8230 Epoch 6: val_accuracy improved from 0.56534 to 0.61765, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 11s 139ms/step - accuracy: 0.6094 - loss: 2.8230 - val_accuracy: 0.6176 - val_loss: 2.8186 Epoch 7/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5597 - loss: 2.8078 Epoch 7: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 335s 8s/step - accuracy: 0.5601 - loss: 2.8077 - val_accuracy: 0.5767 - val_loss: 2.7965 Epoch 8/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.6875 - loss: 2.7892 Epoch 8: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.6875 - loss: 2.7892 - val_accuracy: 0.4706 - val_loss: 2.8048 Epoch 9/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5921 - loss: 2.7917 Epoch 9: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5920 - loss: 2.7916 - val_accuracy: 0.5682 - val_loss: 2.7835 Epoch 10/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:06 6s/step - accuracy: 0.7188 - loss: 2.7685 Epoch 10: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 84ms/step - accuracy: 0.7188 - loss: 2.7685 - val_accuracy: 0.5294 - val_loss: 2.8035 Epoch 11/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5887 - loss: 2.7812 Epoch 11: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 335s 8s/step - accuracy: 0.5886 - loss: 2.7810 - val_accuracy: 0.5795 - val_loss: 2.7718 Epoch 12/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:05 6s/step - accuracy: 0.5469 - loss: 2.7752 Epoch 12: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.5469 - loss: 2.7752 - val_accuracy: 0.6176 - val_loss: 2.7870 Epoch 13/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5894 - loss: 2.7666 Epoch 13: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 336s 8s/step - accuracy: 0.5891 - loss: 2.7665 - val_accuracy: 0.5639 - val_loss: 2.7619 Epoch 14/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:00 6s/step - accuracy: 0.5312 - loss: 2.7529 Epoch 14: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 9s 85ms/step - accuracy: 0.5312 - loss: 2.7529 - val_accuracy: 0.5000 - val_loss: 2.7929 Epoch 15/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5943 - loss: 2.7559 Epoch 15: val_accuracy did not improve from 0.61765 40/40 ━━━━━━━━━━━━━━━━━━━━ 338s 8s/step - accuracy: 0.5943 - loss: 2.7559 - val_accuracy: 0.5540 - val_loss: 2.7567 Epoch 16/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:04 6s/step - accuracy: 0.5156 - loss: 2.7750 Epoch 16: val_accuracy improved from 0.61765 to 0.64706, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 156ms/step - accuracy: 0.5156 - loss: 2.7750 - val_accuracy: 0.6471 - val_loss: 2.7465 Epoch 17/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5916 - loss: 2.7446 Epoch 17: val_accuracy did not improve from 0.64706 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5912 - loss: 2.7446 - val_accuracy: 0.5597 - val_loss: 2.7449 Epoch 18/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:09 6s/step - accuracy: 0.4688 - loss: 2.7546 Epoch 18: val_accuracy did not improve from 0.64706 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.4688 - loss: 2.7546 - val_accuracy: 0.4412 - val_loss: 2.7612 Epoch 19/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5886 - loss: 2.7400 Epoch 19: val_accuracy did not improve from 0.64706 40/40 ━━━━━━━━━━━━━━━━━━━━ 349s 9s/step - accuracy: 0.5885 - loss: 2.7399 - val_accuracy: 0.5739 - val_loss: 2.7381 Epoch 20/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:59 6s/step - accuracy: 0.5938 - loss: 2.7454 Epoch 20: val_accuracy improved from 0.64706 to 0.73529, saving model to best_model.keras 40/40 ━━━━━━━━━━━━━━━━━━━━ 12s 138ms/step - accuracy: 0.5938 - loss: 2.7454 - val_accuracy: 0.7353 - val_loss: 2.7559 Epoch 21/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5903 - loss: 2.7311 Epoch 21: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5902 - loss: 2.7311 - val_accuracy: 0.5611 - val_loss: 2.7270 Epoch 22/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:27 7s/step - accuracy: 0.5156 - loss: 2.7277 Epoch 22: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 87ms/step - accuracy: 0.5156 - loss: 2.7277 - val_accuracy: 0.5000 - val_loss: 2.7872 Epoch 23/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5701 - loss: 2.7228 Epoch 23: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 343s 8s/step - accuracy: 0.5701 - loss: 2.7228 - val_accuracy: 0.5568 - val_loss: 2.7326 Epoch 24/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:06 6s/step - accuracy: 0.6406 - loss: 2.7165 Epoch 24: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.6406 - loss: 2.7165 - val_accuracy: 0.5588 - val_loss: 2.7455 Epoch 25/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5780 - loss: 2.7185 Epoch 25: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 340s 8s/step - accuracy: 0.5780 - loss: 2.7186 - val_accuracy: 0.5653 - val_loss: 2.7243 Epoch 26/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:05 6s/step - accuracy: 0.5469 - loss: 2.7212 Epoch 26: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 88ms/step - accuracy: 0.5469 - loss: 2.7212 - val_accuracy: 0.4118 - val_loss: 2.7332 Epoch 27/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 7s/step - accuracy: 0.5562 - loss: 2.7153 Epoch 27: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 341s 8s/step - accuracy: 0.5566 - loss: 2.7153 - val_accuracy: 0.5710 - val_loss: 2.7169 Epoch 28/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 4:11 6s/step - accuracy: 0.4531 - loss: 2.7034 Epoch 28: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 89ms/step - accuracy: 0.4531 - loss: 2.7034 - val_accuracy: 0.5000 - val_loss: 2.7461 Epoch 29/50 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 6s/step - accuracy: 0.5754 - loss: 2.7116 Epoch 29: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 337s 8s/step - accuracy: 0.5753 - loss: 2.7116 - val_accuracy: 0.5440 - val_loss: 2.7082 Epoch 30/50 1/40 ━━━━━━━━━━━━━━━━━━━━ 3:56 6s/step - accuracy: 0.5781 - loss: 2.6720 Epoch 30: val_accuracy did not improve from 0.73529 40/40 ━━━━━━━━━━━━━━━━━━━━ 10s 95ms/step - accuracy: 0.5781 - loss: 2.6720 - val_accuracy: 0.5882 - val_loss: 2.7078 Epoch 30: early stopping Restoring model weights from the end of the best epoch: 20. Tiempo transcurrido para el entrenamiento: 5268.132543802261 segundos Uso de CPU durante el entrenamiento: 56.89999999999999% Aumento en uso de memoria: 0.2124176025390625 GB Resultados para lr=0.001, l2=0.1, batch_size=64: Tiempo: 5268.132543802261 segundos, CPU: 56.89999999999999%, Memoria: 0.2124176025390625 GB Precisión de validación: 0.7352941036224365 Mejores hiperparámetros encontrados: {'learning_rate': 0.0001, 'l2_regularization': 0.01, 'batch_size': 16, 'val_accuracy': 1.0, 'elapsed_time': 4630.410501003265, 'cpu_usage': 46.2, 'memory_usage': -264822784}
graficas de entrenamiento y validación¶
import matplotlib.pyplot as plt
def plotTraining(hist):
epochs = range(len(hist.history['loss']))
plt.figure(figsize=(12, 10))
# Gráfico de la pérdida
plt.subplot(2, 1, 1)
plt.plot(epochs, hist.history['loss'], '-r', label='Pérdida del entrenamiento')
plt.plot(epochs, hist.history['val_loss'], '-b', label='Pérdida de validación')
plt.title('Pérdida del entrenamiento y validación')
plt.xlabel('Épocas', fontsize=14)
plt.ylabel('Pérdida', fontsize=14)
plt.legend()
plt.grid()
# Gráfico de la precisión
plt.subplot(2, 1, 2)
plt.plot(epochs, hist.history['accuracy'], '-r', label='Precisión del entrenamiento')
plt.plot(epochs, hist.history['val_accuracy'], '-b', label='Precisión de validación')
plt.title('Precisión del entrenamiento y validación')
plt.xlabel('Épocas', fontsize=14)
plt.ylabel('Precisión', fontsize=14)
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
# Uso de la función para graficar
plotTraining(model_history)
from sklearn.metrics import confusion_matrix, f1_score, roc_curve, precision_score, recall_score, accuracy_score, roc_auc_score
from sklearn import metrics
from mlxtend.plotting import plot_confusion_matrix
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
names = [ 'CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS', 'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS', 'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS', 'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA', 'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
test_data_dir = 'datasetpreprocesado/test'
test_datagen = ImageDataGenerator()
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(width_shape, height_shape),
batch_size = batch_size,
class_mode='categorical',
shuffle=False)
custom_Model50= load_model('best_model.keras')
predictions = custom_Model50.predict(test_generator)
y_pred = np.argmax(predictions, axis=1)
y_real = test_generator.classes
matc=confusion_matrix(y_real, y_pred)
plot_confusion_matrix(conf_mat=matc, figsize=(9,9), class_names = names, show_normed=False)
plt.tight_layout()
print(metrics.classification_report(y_real,y_pred, digits = 4))
Found 361 images belonging to 18 classes.
C:\Users\Oscar Diaz\anaconda3\Lib\site-packages\keras\src\trainers\data_adapters\py_dataset_adapter.py:120: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored. self._warn_if_super_not_called()
6/6 ━━━━━━━━━━━━━━━━━━━━ 42s 6s/step precision recall f1-score support 0 0.4783 0.5500 0.5116 20 1 0.4444 0.2000 0.2759 20 2 0.6000 0.4500 0.5143 20 3 0.6667 0.1000 0.1739 20 4 0.2812 0.4500 0.3462 20 5 0.1509 0.4000 0.2192 20 6 0.4872 0.9500 0.6441 20 7 0.1967 0.6000 0.2963 20 8 0.1111 0.0500 0.0690 20 9 0.0385 0.0500 0.0435 20 10 0.2500 0.0500 0.0833 20 11 0.2381 0.2500 0.2439 20 12 0.1250 0.0500 0.0714 20 13 0.0952 0.1000 0.0976 20 14 0.7778 0.7000 0.7368 20 15 0.0000 0.0000 0.0000 20 16 0.3125 0.2500 0.2778 20 17 0.5000 0.0476 0.0870 21 accuracy 0.2909 361 macro avg 0.3196 0.2915 0.2606 361 weighted avg 0.3201 0.2909 0.2602 361
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
from keras.models import load_model
import cv2
names = [
'CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS',
'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS',
'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS',
'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA',
'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS']
# Cargar el modelo
modelt = load_model("models\entrenamiento.keras")
#modelt = custom_vgg_model
# Ruta de la imagen de prueba
imaget_path = 'output\CYANOCORAX YNCAS_CYANOCORAX YNCAS_1.jpg'
# Leer la imagen, cambiar tamaño y preprocesar
imaget=cv2.resize(cv2.imread(imaget_path), (width_shape, height_shape), interpolation = cv2.INTER_AREA)
xt = np.asarray(imaget)
xt=preprocess_input(xt)
xt = np.expand_dims(xt,axis=0)
# Obtener las predicciones del modelo
preds = modelt.predict(xt)
# Obtener la clase predicha y su porcentaje de confianza
predicted_class_index = np.argmax(preds)
predicted_class_name = names[predicted_class_index]
confidence_percentage = preds[0][predicted_class_index] * 100
# Imprimir el resultado
print(f'Clase predicha: {predicted_class_name}')
print(f'Porcentaje de confianza: {confidence_percentage:.2f}%')
# Mostrar la imagen
plt.imshow(cv2.cvtColor(np.asarray(imaget), cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 991ms/step Clase predicha: CYANOCORAX YNCAS Porcentaje de confianza: 98.90%
import os
import cv2
from keras.applications.imagenet_utils import preprocess_input
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
# Definir el tamaño de las imágenes
width_shape = 224
height_shape = 224
# Nombres de las clases
class_names = [
'CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS',
'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS',
'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS',
'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA',
'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS'
]
# Cargar el modelo
model = load_model("models/entrenamiento.keras")
# Directorio de salida
output_dir = "output"
# Obtener una lista de todas las imágenes en el directorio de salida
image_files = [f for f in os.listdir(output_dir) if os.path.isfile(os.path.join(output_dir, f))]
# Número de columnas por fila
num_cols = 5
# Número de imágenes procesadas
num_processed = 0
# Crear una nueva figura
plt.figure(figsize=(20, 20))
# Iterar sobre cada imagen
for image_file in image_files:
# Ruta completa de la imagen
image_path = os.path.join(output_dir, image_file)
# Leer la imagen, cambiar tamaño y preprocesar
image = cv2.imread(image_path)
image = cv2.resize(image, (width_shape, height_shape), interpolation=cv2.INTER_AREA)
image = np.asarray(image)
image = preprocess_input(image)
image = np.expand_dims(image, axis=0)
# Obtener las predicciones del modelo
preds = model.predict(image)
# Obtener la clase predicha y su porcentaje de confianza
predicted_class_index = np.argmax(preds)
confidence_percentage = preds[0][predicted_class_index] * 100
predicted_class_name = class_names[predicted_class_index]
# Mostrar la imagen y la predicción
plt.subplot((len(image_files) - 1) // num_cols + 1, num_cols, num_processed + 1)
plt.imshow(cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB))
plt.title(f'Clase: {predicted_class_index} - {predicted_class_name}\nNombre: {image_file}\nConfianza: {confidence_percentage:.2f}%')
plt.axis('off')
# Incrementar el contador de imágenes procesadas
num_processed += 1
# Ajustar el espacio entre subtramas y mostrar la figura
plt.tight_layout()
plt.show()
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 839ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 636ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 660ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 649ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 650ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 770ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 850ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 830ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 909ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 882ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 957ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 994ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 975ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 835ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 973ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 876ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 861ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 875ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 904ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 890ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 880ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 841ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 930ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 865ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 980ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 2s 2s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 910ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 875ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 850ms/step
import os
import cv2
from keras.applications.imagenet_utils import preprocess_input
from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
# Definir el tamaño de las imágenes
width_shape = 224
height_shape = 224
# Nombres de las clases
class_names = [
'CATHARTES AURA', 'COEREBA FLAVEOLA', 'COLUMBA LIVIA', 'CORAGYPS ATRATUS',
'CROTOPHAGA SULCIROSTRIS', 'CYANOCORAX YNCAS', 'EGRETTA THULA', 'FALCO PEREGRINUS',
'FALCO SPARVERIUS', 'HIRUNDO RUSTICA', 'PANDION HALIAETUS', 'PILHERODIUS PILEATUS',
'PITANGUS SULPHURATUS', 'PYRRHOMYIAS CINNAMOMEUS', 'RYNCHOPS NIGER', 'SETOPHAGA FUSCA',
'SYNALLAXIS AZARAE', 'TYRANNUS MELANCHOLICUS'
]
# Cargar el modelo
model = load_model("models/optimizado.keras")
# Directorio de salida
output_dir = "output"
# Obtener una lista de todas las imágenes en el directorio de salida
image_files = [f for f in os.listdir(output_dir) if os.path.isfile(os.path.join(output_dir, f))]
# Número de columnas por fila
num_cols = 5
# Número de imágenes procesadas
num_processed = 0
# Crear una nueva figura
plt.figure(figsize=(20, 20))
# Iterar sobre cada imagen
for image_file in image_files:
# Ruta completa de la imagen
image_path = os.path.join(output_dir, image_file)
# Leer la imagen, cambiar tamaño y preprocesar
image = cv2.imread(image_path)
image = cv2.resize(image, (width_shape, height_shape), interpolation=cv2.INTER_AREA)
image = np.asarray(image)
image = preprocess_input(image)
image = np.expand_dims(image, axis=0)
# Obtener las predicciones del modelo
preds = model.predict(image)
# Obtener la clase predicha y su porcentaje de confianza
predicted_class_index = np.argmax(preds)
confidence_percentage = preds[0][predicted_class_index] * 100
predicted_class_name = class_names[predicted_class_index]
# Mostrar la imagen y la predicción
plt.subplot((len(image_files) - 1) // num_cols + 1, num_cols, num_processed + 1)
plt.imshow(cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB))
plt.title(f'Clase: {predicted_class_index} - {predicted_class_name}\nNombre: {image_file}\nConfianza: {confidence_percentage:.2f}%')
plt.axis('off')
# Incrementar el contador de imágenes procesadas
num_processed += 1
# Ajustar el espacio entre subtramas y mostrar la figura
plt.tight_layout()
plt.show()
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 848ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 621ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 626ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 629ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 621ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 670ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 679ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 663ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 804ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 868ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 756ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 875ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 848ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 809ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 819ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 956ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 781ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 803ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 807ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 804ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 825ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 995ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 960ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 957ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 958ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 894ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 822ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 848ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 797ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 838ms/step 1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 888ms/step
from tensorflow.keras.models import load_model
from tensorflow.keras.utils import plot_model
# Cargar el modelo
model = load_model("models/optimizado.keras")
# Guardar la representación gráfica de la arquitectura del modelo en un archivo
plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
C:\Users\DIAZOVIEDO\anaconda3\envs\TFMaves\lib\site-packages\keras\src\saving\saving_lib.py:396: UserWarning: Skipping variable loading for optimizer 'adam', because it has 58 variables whereas the saved optimizer has 6 variables. trackable.load_own_variables(weights_store.get(inner_path))
from tensorflow.keras.models import load_model
from tensorflow.keras.utils import plot_model
# Cargar el modelo
model = load_model("models/optimizado.keras")
# Generar un gráfico más compacto
plot_model(model, to_file='model_plot.png', show_shapes=False, show_layer_names=False)
C:\Users\DIAZOVIEDO\anaconda3\envs\TFMaves\lib\site-packages\keras\src\saving\saving_lib.py:396: UserWarning: Skipping variable loading for optimizer 'adam', because it has 58 variables whereas the saved optimizer has 6 variables. trackable.load_own_variables(weights_store.get(inner_path))
import tensorflow as tf
from tensorflow.keras.models import load_model
from IPython.display import Image
from graphviz import Digraph
# Cargar modelos optimizados y entrenados
model_entrenamiento = load_model("models/entrenamiento.keras")
model_optimizado = load_model("models/optimizado.keras")
# Generar diagramas de flujo para cada modelo
def generar_diagrama_flujo(modelo, nombre_modelo):
"""
Genera un diagrama de flujo de la arquitectura del modelo y lo muestra en Jupyter Notebook.
Args:
modelo: Instancia del modelo Keras.
nombre_modelo: Nombre del modelo para identificar el diagrama.
"""
# Convertir la arquitectura del modelo a formato Graphviz
dot_graph = tf.keras.utils.model_to_dot(modelo, show_shapes=True)
# Crear un objeto Digraph a partir del código Graphviz
graph = Digraph(nombre_modelo)
graph.from_dot(dot_graph)
# Personalizar el estilo del diagrama (opcional)
graph.node_attr['shape'] = 'box'
graph.edge_attr['style'] = 'arrowsize=2'
# Mostrar el diagrama en Jupyter Notebook
Image(graph)
# Generar diagrama de flujo para el modelo de entrenamiento
generar_diagrama_flujo(model_entrenamiento, "Modelo entrenamiento")
# Generar diagrama de flujo para el modelo optimizado
generar_diagrama_flujo(model_optimizado, "Modelo optimizado")
C:\Users\DIAZOVIEDO\anaconda3\envs\TFMaves\lib\site-packages\keras\src\saving\saving_lib.py:396: UserWarning: Skipping variable loading for optimizer 'adam', because it has 66 variables whereas the saved optimizer has 6 variables. trackable.load_own_variables(weights_store.get(inner_path))
--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[13], line 35 32 Image(graph) 34 # Generar diagrama de flujo para el modelo de entrenamiento ---> 35 generar_diagrama_flujo(model_entrenamiento, "Modelo entrenamiento") 37 # Generar diagrama de flujo para el modelo optimizado 38 generar_diagrama_flujo(model_optimizado, "Modelo optimizado") Cell In[13], line 25, in generar_diagrama_flujo(modelo, nombre_modelo) 23 # Crear un objeto Digraph a partir del código Graphviz 24 graph = Digraph(nombre_modelo) ---> 25 graph.from_dot(dot_graph) 27 # Personalizar el estilo del diagrama (opcional) 28 graph.node_attr['shape'] = 'box' AttributeError: 'Digraph' object has no attribute 'from_dot'
from tensorflow.keras.models import load_model
# Cargar el modelo
model = load_model("models/optimizado.keras")
# Exportar el modelo a formato ONNX
model.save('model.onnx')
# Abrir Netron y cargar el modelo ONNX
netron model.onnx
Cell In[8], line 10 netron model.onnx ^ SyntaxError: invalid syntax
from tensorflow.keras.models import load_model
from visdom import Visdom
# Cargar el modelo
model = load_model("models/optimizado.keras")
# Crear un objeto Visdom
vis = Visdom()
# Visualizar el modelo
vis.modelsummary(model, name='model')
--------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) Cell In[4], line 2 1 from tensorflow.keras.models import load_model ----> 2 from visdom import Visdom 4 # Cargar el modelo 5 model = load_model("models/optimizado.keras") ModuleNotFoundError: No module named 'visdom'